Let's run KML presentation tool inside a container

# Introduction

Let us run the Presentation Tool inside a container. I will come to what a container is shortly. Suppose you have a software package or project that you share with a friend. When they try to run it, they encounter issues due to incompatible packages, missing dependencies, or a different host operating system. I ran into these same issues back in high school while running and testing other GitHub repositories: as you might guess, the programs simply did not execute. We need a reliable solution so the project runs successfully independent of the system. This is where Docker comes into the picture, and it has shaken up the DevOps world. Wait a second, what is Docker? It sounds like the duck's elder brother at a pond. No, I was kidding. Docker is a tool that solves the problem mentioned earlier: it helps you create, deploy, and run your applications inside containers.

Oh, a new term again: what is a container? You might have heard about containers from DevOps enthusiasts. Containers allow you to package up your application together with its libraries and necessary dependencies, and eventually deploy it to any cloud provider platform or your end machine as one combined package. We will dockerize the Liquid Galaxy KML Presentation Tool and learn how Docker plays its role in shipping it to multi-cloud platforms in production.

Before getting to the KML Presentation Tool, let us learn what Liquid Galaxy is. Liquid Galaxy is an open-source project founded by Google. It started as a panoramic multi-display Google Earth and is also used for operations, marketing, research, and much more. The KML Presentation Tool gives the user an interface capable of creating different sets of data configurations and sends them to be displayed on a Liquid Galaxy. We will containerize the backend (Node, MongoDB) and expose port 3000 from the Docker container. I assume that you know basic Linux commands and Git essentials.

First of all, clone the repository. It might take a while if your internet connection is not stable, so make sure you have a good one. Sometimes this happens when you are at a dorm and the internet is flaky. Trust me, I have been through this experience.

```bash
git clone https://github.com/LiquidGalaxyLAB/Presentation-Tool
```

Let us create three files called .dockerignore, Dockerfile, and docker-compose.yml inside the directory.

```bash
cd Presentation-Tool
touch .dockerignore Dockerfile docker-compose.yml
```

Here, I used `cd` to change into the directory after cloning the repository, and `touch` to create the files (it updates a file's timestamps if it already exists).

Note: If you haven't installed Docker on your machine, you can follow the official documentation, or use the command below if you are on an Ubuntu distro.

```bash
sudo apt install docker.io
```
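To verify the installation, you can check the Docker version and, optionally, add your user to the `docker` group so you can run Docker without sudo (both are standard steps; log out and back in for the group change to take effect):

```bash
# check that the Docker CLI is installed
docker --version

# optional: run Docker without sudo (takes effect after re-login)
sudo usermod -aG docker $USER
```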

Let's copy and paste the snippets below into the corresponding files, like we used to do from Stack Overflow whenever an issue arose. Before we change this whole universe by writing code, remember how to copy [Ctrl + C], cut [Ctrl + X], and paste [Ctrl + V]. Trust me, it will save you a lot of time and make you productive.

Running your Node application inside a container boils down to copying your files into a directory and installing the packages. To run a Node application in a container, you first have to build an image, which includes your code, configuration, runtime, and more.

Dockerfile

```Dockerfile
FROM node:latest

RUN mkdir -p /usr/src/app

WORKDIR /usr/src/app

COPY package.json /usr/src/app

RUN npm install

COPY . /usr/src/app

EXPOSE 3000

CMD [ "npm", "start" ]
```

The image includes Node as the runtime environment and npm as the package manager. Note that every build stage in a Dockerfile must begin with a `FROM` instruction. We use `RUN` to execute a command that creates a new directory. Although `WORKDIR` would create the working directory by itself if it did not exist, I still recommend setting it explicitly. We copy `package.json` into that directory and install the dependencies, then copy the rest of the source code. Finally, we expose port 3000 on the container, and `CMD` specifies the command that starts the web application.
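As a side note, since `WORKDIR` creates its directory when it is missing, the explicit `RUN mkdir -p` step above is technically optional. A minimal equivalent sketch:

```Dockerfile
# minimal variant: WORKDIR creates /usr/src/app if it does not exist,
# so the separate RUN mkdir -p step can be dropped
FROM node:latest
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
```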

.dockerignore

```
.gitignore
.env
node_modules
npm-debug.log
```


You might know .gitignore, but what is this .dockerignore? Does it work similarly to .gitignore? Yes, it does: it specifies which files and directories should not be copied into the image. Copying everything blindly could even expose your credentials. I also recommend excluding unnecessarily large files, which leads to a smaller Docker image and eventually speeds up the image build.

I am pretty sure you are in a hurry to build the image. Let's build it using the docker build command.

```bash
docker build -t yourdockerhubname/kmlnodejsimage .
```

The `-t` flag allows you to tag the image so that you can push it to Docker Hub later, and the trailing `.` specifies that the build context is the current directory. You have built a Docker image. Let's check:

```bash
docker images
```
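If you plan to push the image to Docker Hub later, you can also build with an explicit version tag instead of relying on the implicit `latest` (the repository name and tag here are placeholders):

```bash
# same build, but with an explicit version tag (placeholder values)
docker build -t yourdockerhubname/kmlnodejsimage:v1 .
```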

You can run your image in a container. Three flags are useful here: `-p` to publish a port, `-d` to run in the background (detached), and `--name` to name the container.

```bash
docker run --name kmlnodejsimage -p 80:3000 -d yourdockerhubname/kmlnodejsimage
```

You can check the status of the container too by using the command below.

```bash
docker ps
```
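If the container is not listed, or keeps restarting, the container logs are the first place to look. These are standard Docker commands, using the container name we chose above:

```bash
# inspect the application output of the container named above
docker logs kmlnodejsimage

# stop and remove the container when you are done experimenting
docker stop kmlnodejsimage
docker rm kmlnodejsimage
```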

You can visit it in your browser, but you might get errors since we haven't configured the database yet. We will leave publishing the image to Docker Hub for later.
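As a quick smoke test from the terminal, you can hit the published host port (we mapped host port 80 to container port 3000 above; the exact response depends on the app's routes):

```bash
# host port 80 is mapped to container port 3000
curl -i http://localhost:80
```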

Next, edit the database connection code in the project.

backend/database/index.js

```js
const mongoose = require('mongoose')

// Before: connected to the presentationsDB database on localhost
// module.exports = mongoose.connect(`mongodb://localhost:27017/presentationsDB`, {
//   useNewUrlParser: true,
//   useUnifiedTopology: true,
// })

// After: connect via the "mongo" hostname, the service name we will
// define in docker-compose.yml
module.exports = mongoose.connect(`mongodb://mongo:27017/presentationsDB`, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useFindAndModify: false,
  useCreateIndex: true,
})

var db = mongoose.connection
db.on('error', console.error.bind(console, 'connection error:'))
```
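If you want the same file to work both inside and outside Docker, one common pattern is to read the connection string from an environment variable. Here is a minimal sketch (the variable name `MONGO_URI` is my assumption, not something the project defines):

```js
const mongoose = require('mongoose')

// Hypothetical variant: use MONGO_URI when set, otherwise fall back
// to the Compose service hostname
const uri = process.env.MONGO_URI || 'mongodb://mongo:27017/presentationsDB'

module.exports = mongoose.connect(uri, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
})
```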

Now, let's define services with Docker Compose. A service defines how a container image runs. Docker Compose allows you to define multiple services and build multi-container applications; for example, you can add a cache, a backend, a database, a message broker, and so on.

docker-compose.yml

```yaml
version: '3.7'
services:
  app:
    container_name: backend
    restart: always
    build: .
    ports:
      - '3000:3000'
    depends_on:
      - mongo

  mongo:
    container_name: mongo
    image: mongo
    restart: always
    volumes:
      - ./data:/data/db
    ports:
      - '27017:27017'
```

Here we defined a service called app and named its container backend. We instructed Docker to restart the container if it fails, to build the image from the current directory, and to map the host port to the container port. Similarly, we added another service called mongo and mounted the host directory ./data into the container directory /data/db. `depends_on` makes the app service start after mongo, and Compose's default network lets the app reach MongoDB under the hostname mongo. Let's run it and spin up the two containers.

```bash
docker-compose up
```
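Once both containers are up, you can check on them with the usual Compose subcommands:

```bash
# list the containers managed by this compose file
docker-compose ps

# follow the logs of the backend service
docker-compose logs -f app

# stop and remove the containers (and the default network)
docker-compose down
```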

# Things we haven't touched on

- Publishing the Docker image to Docker Hub
- A deep dive into Docker container registry security
- Deploying to multi-cloud platforms

# Conclusion

Docker is a beneficial tool that ships your application without giving you burnout. In this blog, you learned basic Docker commands, learned to create and run a container, and ran multiple containers using docker-compose.

It's been a long time since I last wrote a blog post, and I will keep writing. You can support me by sharing this article with any colleagues who are interested in Docker. I am also planning to write a Docker series in upcoming posts. Stay tuned.

