Micro-Stack (ver 1.0)

A Consultation Guide to a Fully Dockerized Enterprise IT Infrastructure

Microservices architecture is a modern approach to designing enterprise applications by breaking down monolithic applications into smaller, independent services. Each service focuses on a specific business function and can be developed, deployed, and scaled independently. This architecture offers several benefits:

  1. Scalability: Services can be scaled independently based on demand.
  2. Maintainability: Smaller codebases are easier to manage and update.
  3. Resilience: Failure in one service doesn't necessarily affect others.

For example, companies like Netflix and Amazon use microservices to handle their vast and complex systems efficiently. In this project, we will use Docker Swarm or Kubernetes as the orchestration layer to implement a microservice-based enterprise information system architecture for an example organization. The result is well suited to small businesses or start-ups that want to host cost-effective information management systems with very few resources, while remaining able to scale out easily with minimal changes.

I. Concept

Microservices architecture is an approach to designing software applications as a collection of small, independent services that communicate over well-defined APIs. Each service is focused on a specific business function and can be developed, deployed, and scaled independently. The typical component architecture of a microservice application stack is illustrated in the figure below.

Typical Architecture of a Microservice Application

The microservice modules of the application are packaged as container images, which are built, shipped, and deployed in three phases. The overall procedure is illustrated in the figure below.

App Container Build, Ship & deploy

Once a container image has been built, its life cycle in the production environment is illustrated in the figure below. That life cycle is managed by Docker management commands and by the directives defined in configuration files. Where these configuration files reside has a significant impact on the high-availability setup and on orchestration compatibility. We will provide a complete solution for both in this project.

Container Life Cycle in Production Environment

To support the microservice application architecture illustrated above, the enterprise infrastructure needs some adjustments: it must adopt container-based technology and its ecosystem to meet the new requirements for scalability, flexibility, and reliability. A typical microservices orchestration, which works mainly with container-based virtual service stacks, is illustrated in the figure below.

Enterprise Microservice Architecture Overview

In this project we will develop the supporting code, scripts, and procedures to implement all of the key elements needed. The system consists of several relatively independent areas; we will introduce them gradually until the system emerges as a scalable, flexible, and reliable end-to-end solution.

II. Micro Data Center

A. Hardware Stack & Initial Boot up

(If you are using cloud resources, skip this step and read the "Virtualization Option" section.)

Datacenter Infrastructure

(Temp illustration: Original picture is from WTI website https://www.wti.com/pages/out-of-band-management-solutions-data-centers)

(Temp illustration: Original picture is from Redhat website https://www.redhat.com/en/blog/pxe-boot-uefi)

Service Stack of the PXE Boot

Along with the development of OS boot technology, PXE boot has evolved into the following configuration models: 1) PXE with TFTP: supports net-boot in a local network environment; 2) PXE with HTTP: supports net-boot across remote networks; 3) iPXE: covers all of the above capabilities.
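
As a concrete illustration, a single dnsmasq instance can provide the proxy-DHCP and TFTP pieces of the first model and chain-load iPXE for the third. The sketch below is a minimal assumed setup only (Debian/Ubuntu host, an existing DHCP server on 192.168.1.0/24, TFTP root /srv/tftp, the undionly.kpxe loader); adapt it to your own environment.

# Minimal PXE/iPXE sketch (assumptions: Debian/Ubuntu host, existing DHCP server on 192.168.1.0/24)
apt-get install -y dnsmasq ipxe

# Run dnsmasq as a proxy-DHCP + TFTP server alongside the existing DHCP server
cat >/etc/dnsmasq.d/pxe.conf <<'EOF'
dhcp-range=192.168.1.0,proxy
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=undionly.kpxe
EOF

mkdir -p /srv/tftp
cp /usr/lib/ipxe/undionly.kpxe /srv/tftp/   # iPXE chain-loader shipped by the 'ipxe' package
systemctl restart dnsmasq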

iPXE Boot to Install OS on Client Machines

PXE Boot Sequence

To support the PXE boot models listed above, we have developed a PXE server management application (user: demo, password: demo). The server can boot anywhere from a few computers to millions on the same subnet. It is also an ideal tool for AI researchers and developers setting up their development environments. A screenshot of a client computer booting is shown below.

Screenshot of a PXE Client Booting

B. Software Defined Network

Virtual networking is a technology that allows multiple computers, virtual machines (VMs), containers, devices, and services to communicate with each other without needing physical connections such as cables. It uses software to create virtual versions of traditional network tools, such as switches and network adapters, which makes networks easier to manage and configure. We will use Open-Switch to implement this in this project.

Virtual Network Architecture

The figure above illustrates two virtual switches. Switch-1 is configured with connectivity to and from the outside of the cluster, which enables the containers to communicate with the outside world. Switch-2 has no outside connectivity configured; it is used as an internal network and supports secure communication among the containers. The Docker container deployment tools can create one or more vNets on each switch to isolate communications virtually.
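
In Docker terms, the two switches roughly correspond to an externally reachable network and an internal-only network. A minimal sketch, assuming the Swarm overlay driver and the hypothetical names frontend_net and backend_net:

# Run on a swarm manager node (frontend_net / backend_net are assumed names)
# Externally reachable network, analogous to switch-1
docker network create -d overlay --attachable frontend_net

# Internal-only network with no outside connectivity, analogous to switch-2
docker network create -d overlay --internal backend_net

# Verify the networks exist
docker network ls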

C. Container Orchestration

Docker Compose

Docker Compose is a standalone orchestration tool that runs on an individual Docker host and configures related microservices into a feature stack.

Docker Compose
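
A minimal sketch of such a feature stack on a single Docker host; the image name example/api:latest, the port mappings, and the credentials are assumptions for illustration only:

# Write a minimal compose file and bring the stack up
cat >docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - api
  api:
    image: example/api:latest        # hypothetical application image
    environment:
      - DB_HOST=db
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=example    # demo value only
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
EOF

docker compose up -d
docker compose ps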

Docker Swarm

Docker orchestration involves managing and coordinating multiple containers so that they work together seamlessly. There are two well-known containerization technologies on the market: Docker containers and Linux LXC containers. An LXC container sits closer to the operating system, so it is more efficient and secure. A Docker container behaves more like a software package, so it is highly flexible and portable, and makes it very easy to develop, ship, and deploy a software system. In this project, we use Docker containers to implement the target architecture, while LXC containers and VMs simulate the hardware resources used in the architecture.

For Docker container orchestration we also face two options: Docker Swarm vs. Kubernetes. Docker Swarm is the cluster mode of the Docker Engine; it ships with the Docker software package, requires no extra installation, and makes it very easy to configure a small cluster. Kubernetes is widely used by public cloud providers and is well suited to large auto-scaling systems, but its setup process for self-hosting is more complicated than Docker Swarm's. From a software development and deployment point of view, the two orchestration systems show no huge differences. For simplicity, we choose Docker Swarm as the orchestrator for this project. Here is an overview of the Docker Swarm cluster we are going to use in this project.
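
For reference, a small Swarm cluster can be brought up roughly as follows; the manager address 192.168.1.10 is an assumption, and the worker token is printed by the join-token command:

# On the first node: initialize the swarm (assumed advertise address)
docker swarm init --advertise-addr 192.168.1.10

# Print the join command for workers
docker swarm join-token worker

# On each worker node, run the printed command, e.g.:
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on the manager: verify the cluster membership
docker node ls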

Docker Orchestration
  1. Docker Engine: The core part of Docker, responsible for running containers. It consists of:

    • Docker Daemon: Runs on the host machine and manages Docker containers.

    • Docker Client: The command-line interface that users interact with to communicate with the Docker Daemon.

  2. Orchestration Tools: Tools like Kubernetes and Docker Swarm are used to manage and orchestrate containers across multiple hosts.

    • Kubernetes: An open-source platform that automates deploying, scaling, and operating application containers. It uses a master-worker architecture where the master node manages the cluster and worker nodes run the containers.

    • Docker Swarm: Docker's native clustering and orchestration tool. It uses a manager-worker architecture where the manager node orchestrates and manages the cluster, and worker nodes run the containers.

  3. Ecosystems: The surrounding ecosystem is also very important for the orchestration system to support feature resiliency.

    • Virtual Networking: Allows engineers to deploy any kind of network virtually, so that devices and services can communicate effectively inside the cluster.

    • Hyper Converged Storage: Provides shared storage volumes across all cluster nodes, so that services residing in different containers can work together without conflicts caused by data differences.

    • Virtual Failover Routers: Team up the resources running a service so that they behave as a single unit. Whenever one member fails, another member steps in to keep the service alive at all times.

Host Work-flow

Docker Swarm consists of a group of Docker host nodes; each node plays the role of either a manager or a worker in the cluster. Every node is a Docker host running the Docker Daemon and uses a client-server architecture to manage and run containers. Here is an overview of the key components and how they interact:

Docker Architecture

Core Components

  1. Docker Engine: The core part of Docker, responsible for building, running, and managing Docker containers. It consists of:
    • Docker Daemon (dockerd): Runs on the host machine and performs the heavy lifting of building, running, and managing containers.
    • Docker Client: The command-line interface (CLI) that users interact with to issue commands to the Docker Daemon.
    • REST API: Allows the Docker Client and other tools to communicate with the Docker Daemon programmatically.
  2. Docker Images: Read-only templates used to create containers. Images are built from a series of layers, each representing a different stage in the build process. They can be pulled from Docker Hub or other registries.
  3. Docker Containers: Lightweight, standalone, and executable packages that include everything needed to run a piece of software, including the code, runtime, libraries, and settings. Containers are instances of Docker images.
  4. Docker Registries: Storage and distribution systems for Docker images. Docker Hub is the default public registry, but private registries can also be set up.
  5. Docker Compose: A tool for defining and running multi-container Docker applications. With a YAML file, you can specify the services, networks, and volumes for your application.

Workflow

  1. Build: Create a Docker image using a Dockerfile, which contains a series of instructions on how to build the image.
  2. Ship: Push the Docker image to a registry (e.g., Docker Hub) from where it can be pulled by other users or systems.
  3. Pull: Retrieve the Docker image from the registry so that containers can be created from it on any Docker-enabled host.
  4. Run: Deploy the Docker image as a container on any Docker-enabled host, ensuring consistent behavior across different environments.
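
The four steps above map roughly onto the following commands; the registry address registry.example.com and the image name demo/webapp are assumptions for illustration:

# 1. Build an image from the Dockerfile in the current directory
docker build -t registry.example.com/demo/webapp:1.0 .

# 2. Ship the image to a registry
docker push registry.example.com/demo/webapp:1.0

# 3. Pull the image on any Docker-enabled host
docker pull registry.example.com/demo/webapp:1.0

# 4. Run the image as a container
docker run -d --name webapp -p 8080:80 registry.example.com/demo/webapp:1.0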

D. Hyper Converged Storage

1. Ceph Distributed File System

Swarm & CephFS Coexisted Cluster

Hyper Converged Storage

MicroCeph: Deployment of Simplified Small Ceph

Architecture of MicroCeph
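
A minimal sketch of bringing up MicroCeph on a small cluster, assuming three Ubuntu nodes (node1..node3) each with a spare disk /dev/sdb; exact commands may differ between MicroCeph releases:

# On node1: install and bootstrap the cluster
sudo snap install microceph
sudo microceph cluster bootstrap

# On node1: generate a join token for each additional node
sudo microceph cluster add node2        # prints a token for node2
# On node2 (after installing the snap): join with the printed token
sudo microceph cluster join <token>

# On every node: add the spare disk as an OSD
sudo microceph disk add /dev/sdb --wipe

# Create a CephFS file system to share across the swarm nodes (requires an MDS)
sudo microceph.ceph osd pool create cephfs_meta
sudo microceph.ceph osd pool create cephfs_data
sudo microceph.ceph fs new swarmfs cephfs_meta cephfs_data
sudo microceph.ceph status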

2. Gluster Distributed File System

Swarm & GlusterFS Coexisted Cluster

Hyper Converged Storage
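
A minimal sketch of the equivalent GlusterFS setup, assuming three nodes (node1..node3) with a dedicated brick directory /data/brick on each:

# On every node: install and start the Gluster daemon (Debian/Ubuntu)
apt-get install -y glusterfs-server
systemctl enable --now glusterd

# From node1: form the trusted storage pool
gluster peer probe node2
gluster peer probe node3

# Create and start a 3-way replicated volume (assumes /data is a dedicated partition)
gluster volume create swarm_vol replica 3 node1:/data/brick node2:/data/brick node3:/data/brick
gluster volume start swarm_vol

# On every swarm node: mount the shared volume for container bind mounts
mkdir -p /mnt/swarm_vol
mount -t glusterfs node1:/swarm_vol /mnt/swarm_vol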

(Todo: 1. Refactor the 1-click deployment script; 2. Document this section)

E. Routing, LB & HA Strategy

Resource Teaming Up

Resource Teaming Up

Service Failover

Service Failover

Example: Keepalived Configuration
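
A minimal keepalived sketch for a two-node failover pair; the interface eth0, the virtual IP 192.168.1.100, and the password are assumptions for illustration:

# On both routers (Debian/Ubuntu)
apt-get install -y keepalived

# Primary node configuration; on the standby node use "state BACKUP" and a lower priority
cat >/etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass demo_pass
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
}
EOF

systemctl enable --now keepalived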

(Todo: 1. Refactor the 1-click deployment script; 2. Document this section)

III. Micro Stacks

A. Typical Domains of the Enterprise Information System

Business subsystems are the individual components within a business system that work together to support the overall business objectives. These subsystems can be categorized based on the specific functions they perform. Here's a look at some key business subsystems:

Primary Business Systems
    1. Human Resources Management: Deals with employee recruitment, development, retention, and separation.
    2. Financial Management: Manages the company's finances, including accounting, budgeting, and financial reporting.
    3. Operations Management: Focuses on the production of goods and services, including process optimization, supply chain management, and quality control.
    4. Sales and Marketing: Handles the promotion and selling of products or services, market research, advertising, and customer relationship management.
    5. Information Systems: Supports all other subsystems by providing data management, IT infrastructure, and technology services.
    6. Customer Service: Ensures customer satisfaction through support, service delivery, and handling inquiries or complaints.
    7. R&D (Research and Development): Innovates and develops new products or improves existing ones to maintain competitive advantage.
    8. Procurement: Manages the sourcing, purchasing, and supply of goods and services needed by the business.

These subsystems must be well-coordinated to ensure efficient and effective operations within the organization. Each subsystem plays a vital role in achieving the overall business goals.

B. Common Information System

Enterprise Information Systems

C. Financial, HR and Asset Management

Financial HR and Asset

D. Business Operation & Sales Related Subsystems

There is a variety of systems in this scope; we choose just a few typical cases as examples in this area.

Business Operational Systems

IV. Supporting Systems

A. RD, DevOps & CI/CD Systems (Deprecated)

B. Analytical & Decision Supporting (Deprecated)

C. Audit, Monitoring & Notification (Deprecated)

APPENDIX A: Kubernetes Orchestration (Docker Swarm Alternative)

(Todo: 1. Refactor the 1-click deployment script; 2. Doc this section)