This blog post covers the tasks to create a Docker image, upload the image to Docker Hub, and clean up the image and container on the local machine.
I still get the error shown in the following image, and it seems this error is related to the installed macOS version 10.15.5.
To be able to work with the Vue.js project, I now use the remote container development functionality in Visual Studio Code as a workaround. This is very close to my blog post “Run a MicroProfile Microservice on OpenLiberty in a Remote development container in Visual Studio Code”.
You can follow these steps to set up the workaround for the “OS X 64-bit with Unsupported runtime (83)” problem.
SETUP AND CONFIGURATION
Ensure you have installed Docker Desktop on your local machine.
Step 1: Install the following extensions in Visual Studio Code
Step 3: Start “Remote Containers: Add Development Container Configuration Files …” and select a container definition as a starting point; here I use the Node 14 container definition (you can customize the “Dockerfile” to your needs). The container configuration in the Dockerfile contains Node.js, npm and yarn, which I need for the Vue.js development.
The gif below shows the steps.
Step 4: Verify the newly created folder “.devcontainer” and related files “devcontainer.json” and “Dockerfile”.
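The generated “Dockerfile” can be customized at this point. A minimal sketch could look like the following; the exact base image tag the extension generates may differ, and the Vue CLI install is just an example customization of mine:

```dockerfile
# Sketch of a customized dev container Dockerfile, assuming the Node 14 base image
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:14

# Node.js, npm and yarn ship with the base image;
# install the Vue CLI globally as an example customization
RUN su node -c "npm install -g @vue/cli"
```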
Step 5: In my case I only need to customize the “devcontainer.json” file to expose port 8080, so that I can access my Vue.js application in a local browser.
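A minimal “devcontainer.json” for this could look like the sketch below; the `name` value is illustrative, and `forwardPorts` is the setting that exposes port 8080 to the local browser:

```json
{
  "name": "Node.js 14 Vue.js dev container",
  "build": { "dockerfile": "Dockerfile" },
  "forwardPorts": [8080]
}
```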
Step 6: Now open the local folder in the remote development container with “Remote Containers: Open Folder in Container”. That maps the local folder as a volume into the remote development container, so code changes are saved on your local file system, and you can start the Vue.js development.
In the gif you see:
- Start “Remote Containers: Open Folder in container”
- Select a folder and open a terminal session in that folder
- Execute “yarn serve” in the terminal session
- See that it works: the application is running and can be accessed in a local browser using the URL “http://localhost:8080”
I hope this was useful for you, and let’s see what’s next!
#Docker, #Container, #Vuejs, #VisualStudioCode, #RemoteDevelopment
In this blog post I want to point out a simple topic: How to run a simple PostgreSQL Docker image as a non-productive container in OpenShift? As you maybe know, OpenShift by default doesn’t allow container images to run as root.
The image below shows the result of simply deploying the PostgreSQL image from Docker Hub.
But in this blog post we choose an alternative way: we don’t change the security settings in OpenShift; instead, we customize the PostgreSQL Docker image a bit. We will follow the steps to create a PostgreSQL database called database-articles on OpenShift for the Cloud Native Starter reactive example.
These are the major steps:
- Write the specifications and configurations for:
- Execute the oc CLI commands to:
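One common way to customize the image for OpenShift’s non-root model is to give the root group (GID 0) the same permissions as the postgres user, because OpenShift starts containers with a random user ID that belongs to group 0. The sketch below shows the idea; the base image tag and directory paths follow the official postgres image and may need adjusting for your version:

```dockerfile
# Sketch: PostgreSQL image prepared for OpenShift's arbitrary, non-root user IDs
FROM postgres:12

# OpenShift runs the container with a random UID in group 0,
# so group 0 needs the same rights as the postgres user on the data directories
RUN chgrp -R 0 /var/lib/postgresql /var/run/postgresql && \
    chmod -R g=u /var/lib/postgresql /var/run/postgresql
```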
In this blog post I want to highlight that I just created a GitHub project and a 10-minute YouTube video about “How to setup mongoDB in less than 4 min on a free IBM Cloud Kubernetes cluster at a Hackathon”.
My objective is to provide a small guide on how to set up a MongoDB server and a Mongo UI (Mongo-Express) on a free IBM Cloud Kubernetes cluster, for situations when you don’t want to use the existing MongoDB service on IBM Cloud.
On the free IBM Cloud Kubernetes cluster, no persistent volume claims are used. So keep in mind: if your Pod in Kubernetes crashes, the data of the database is lost.
In other words, your UI application has to access the database through a server application that also runs on the free Kubernetes cluster (like the Mongo UI (Mongo-Express) in this example). You should implement a backend-for-frontend architecture.
The YouTube video shows the setup and describes how it works.
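The no-persistence setup described above could be sketched as a minimal Kubernetes Deployment; the names and image tag here are assumptions for illustration, not the exact manifests from the GitHub project:

```yaml
# Minimal MongoDB Deployment without a persistent volume claim:
# an emptyDir volume is used, so the data is lost when the Pod is rescheduled
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:4.2
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
      volumes:
      - name: data
        emptyDir: {}
```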
In this blog post I want to point out an awesome topic: “Run a Docker container image as a Cloud Foundry App on IBM Cloud”.
The advantage of this approach is that you don’t need to instantiate a Kubernetes or OpenShift cluster; you can just run a single Docker image with your single application on IBM Cloud. That can be useful in situations where you need to control the contents of your application and the Cloud Foundry buildpack mechanism restricts you.
IBM offers to run Docker images as Cloud Foundry apps, but this support comes with some restrictions.
One impact of that situation is that you don’t see the VCAP variables and you can’t use the out-of-the-box binding for IBM Cloud services. You have to manage the bindings to your IBM Cloud services by yourself.
Let’s start with a short guide on how to set up a Cloud Foundry application using a Docker image.
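The core of the setup is a `cf push` with the `--docker-image` flag. An illustrative sequence could look like this; the app and image names are placeholders, and an existing Cloud Foundry org and space on IBM Cloud are assumed:

```sh
# Log in and target your Cloud Foundry org and space (interactive)
ibmcloud login
ibmcloud target --cf

# Push the app from a public Docker Hub image instead of a buildpack
ibmcloud cf push my-app --docker-image docker.io/myaccount/my-image:latest
```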
This blog post is structured in:
- Setup and configuration of Visual Studio Code
- Run the Authors microservice in the remote development container
- Debug the Authors microservice in the remote development container
This is a very short blog post about the usage of a Docker container in detached and attached mode. Sometimes participants in workshops want to reconnect to a Docker container, because they closed the terminal session with the container, which was running in interactive mode, and they want to reconnect to their existing container.
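The relevant commands can be sketched as follows; the container name and image are just examples:

```sh
# Start a container in detached mode (-d) with a name we can refer to later
docker run -d --name myjenkins jenkins/jenkins:lts

# Reattach the terminal to the container's main process
docker attach myjenkins

# Alternatively, open a new interactive shell in the running container
docker exec -it myjenkins bash

# If the container has already exited, restart it attached and interactive
docker start -ai myjenkins
```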
Usually, when you use an existing container image to create your own customized configuration, you don’t have deep knowledge of how that container image is built, and you have questions like: “What are the folder rights?”, “What are the installation paths of applications?”, or other information you need to customize the container image to your needs.
You can learn about the existing image by visiting the GitHub or Docker Hub project of that image (for example, the GitHub “docker-library / repo-info” and Docker Hub “jenkins repo-info” projects for Jenkins). But to ensure that your customization works, you have to run and access the running container on the command line and verify your changes step by step before running the image in Kubernetes.
Here are the steps to customize a Jenkins container image that I want to run on Minikube; you can try it out:
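A first exploration of the image could look like the sketch below; the image tag and paths are assumptions and depend on the image version:

```sh
# Run the image locally and open a shell in the running container
docker run -d --name jenkins-test -p 8080:8080 jenkins/jenkins:lts
docker exec -it jenkins-test bash

# Inside the container, answer the questions from above:
whoami                    # which user runs the process?
ls -l /var/jenkins_home   # what are the folder rights?
which java                # where are applications installed?
```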
We defined a Dockerfile to create a Docker image for our Cloud-Native-Starter workshop, especially for Windows 10 users. The users can now simply create a Docker image on the local Windows 10 machine and then follow the guided steps in the hands-on workshop documentation and use the bash scripts. The reason why we don’t build a Docker image and share it on Docker Hub is that we want to give users the freedom to make their own customizations.
These are some challenges we had during the testing of the Dockerfile definition:
- File sharing for Docker images on Windows
- Docker port forwarding
- Docker in Docker
- Istio Virtual service configuration
- Linux tools missing
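Several of these challenges show up in how the workshop container is started. An illustrative invocation on Windows could look like this; the image name and host path are assumptions:

```sh
# File sharing on Windows: mount a local folder into the container
# Docker in Docker: mount the Docker socket so the container can use the host daemon
# Port forwarding: publish port 8080 to reach services from a local browser
docker run -it \
  -v //c/cloud-native-starter:/workshop \
  -v //var/run/docker.sock:/var/run/docker.sock \
  -p 8080:8080 \
  my-workshop-image bash
```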
Let’s get started with the first step: building a container which contains the Scores-Service using Docker.
This blog post is all about building and running a Docker container on a local machine.
Topics you will find in this post: my experience of learning how to configure a Dockerfile, along with my creation of the Dockerfile for the Scores-Service, including the following steps:
- Choosing the base image
- Installing the needed packaging tools
- Defining the source code location and copying the source code into the container
- Configuring a new user and group for Bower, a package manager
- Setting up the Scores-Service
- Running the Scores-Service container locally
Note: This is not a blueprint, this is just how I did it and I share my experience with you.
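The steps above could be sketched as a Dockerfile like the following. This is not the exact Dockerfile from the post: the base image, paths, port, and user names are assumptions for illustration; the dedicated user is there because Bower refuses to run as root:

```dockerfile
# 1. Choose the base image
FROM node:10

# 2. Install the needed packaging tools
RUN npm install -g bower

# 3. Define the source code location and copy the source code into the container
WORKDIR /usr/src/app
COPY . .

# 4. Configure a new user and group for Bower, which won't run as root
RUN groupadd bower && useradd -m -g bower bower && \
    chown -R bower:bower /usr/src/app
USER bower

# 5. Set up the Scores-Service
RUN bower install && npm install

# 6. Run the Scores-Service
EXPOSE 3000
CMD ["npm", "start"]
```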
Architecture of the Scores-Service
This is the relevant architecture for building the Docker container. The Scores-Service UI and the Scores Core Service will run in the Docker container locally. The Cloudant service still runs on IBM Cloud.