Coding

Docker Tutorial for Beginners – A Full DevOps Course on How to Run Applications in Containers

  • 00:00:00 [Music]
  • 00:00:03 hello and welcome to the Docker for
  • 00:00:05 beginners course my name is Mumshad
  • 00:00:07 Mannambeth and I will be your instructor
  • 00:00:09 for this course I'm a DevOps and cloud
  • 00:00:12 trainer at kodekloud.com which is an
  • 00:00:15 interactive hands-on online learning
  • 00:00:17 platform I've been working in the
  • 00:00:20 industry as a consultant for over 13
  • 00:00:22 years and have helped hundreds of
  • 00:00:24 thousands of students learn technology
  • 00:00:26 in a fun and interactive way in this
  • 00:00:29 course you will learn docker through a
  • 00:00:31 series of lectures that use animation
  • 00:00:34 illustration and some fun analogies that
  • 00:00:37 simplify complex concepts we have demos
  • 00:00:40 that will show you how to install and
  • 00:00:42 get started with docker and most
  • 00:00:44 importantly we have hands-on labs that
  • 00:00:46 you can access right in your browser I
  • 00:00:49 will explain more about it in a bit but
  • 00:00:51 first let's look at the objectives of
  • 00:00:53 this course in this course we first try
  • 00:00:56 to understand what containers are what
  • 00:00:58 docker is and why you might need it and
  • 00:01:00 what it can do for you we will see how
  • 00:01:03 to run a docker container how to build
  • 00:01:05 your own docker image we will see
  • 00:01:08 networking in docker and how to use
  • 00:01:10 docker compose what docker registry is
  • 00:01:12 how to deploy your own private registry
  • 00:01:14 and we then look at some of these
  • 00:01:16 concepts in depth and we try to
  • 00:01:19 understand how docker really works under
  • 00:01:21 the hood we look at docker for Windows
  • 00:01:23 and Mac before finally getting a basic
  • 00:01:26 introduction to container orchestration
  • 00:01:28 tools like Docker Swarm and Kubernetes
  • 00:01:31 here's a quick note about hands-on labs
  • 00:01:33 first of all to complete this course you
  • 00:01:36 don't have to set up your own labs well
  • 00:01:38 you may set it up if you wish to if you
  • 00:01:41 wish to have your own environment and we
  • 00:01:43 have a demo as well but as part of this
  • 00:01:45 course we provide real labs that you can
  • 00:01:48 access right in your browser anywhere
  • 00:01:50 anytime and as many times as you want
  • 00:01:53 the labs give you instant access to a
  • 00:01:56 terminal to a docker host and an
  • 00:01:58 accompanying quiz portal the quiz portal
  • 00:02:01 asks a set of questions such as
  • 00:02:03 exploring the environment and gathering
  • 00:02:05 information or you might be asked to
  • 00:02:07 perform an action such as running a docker
  • 00:02:09 container the quiz portal then validates
  • 00:02:12 your work and
  • 00:02:13 gives you feedback instantly every
  • 00:02:16 lecture in this course is accompanied by
  • 00:02:18 such challenging interactive quizzes
  • 00:02:20 that make learning docker a fun
  • 00:02:23 activity so I hope you're as thrilled
  • 00:02:26 as I am to get started so let us begin
  • 00:02:29 [Music]
  • 00:02:38 we're going to start by looking at a
  • 00:02:40 high-level overview on why you need
  • 00:02:42 docker and what it can do for you let me
  • 00:02:45 start by sharing how I got introduced to
  • 00:02:47 Docker in one of my previous projects I
  • 00:02:50 had this requirement to set up an
  • 00:02:52 end-to-end application stack including
  • 00:02:54 various different technologies like a
  • 00:02:57 web server using node.js and a database
  • 00:02:59 such as MongoDB and a messaging system
  • 00:03:02 like Redis and an orchestration tool
  • 00:03:04 like ansible we had a lot of issues
  • 00:03:06 developing this application stack with
  • 00:03:09 all these different components first of
  • 00:03:11 all their compatibility with the
  • 00:03:13 underlying OS was an issue we had to
  • 00:03:15 ensure that all these different services
  • 00:03:18 were compatible with the version of OS
  • 00:03:21 we were planning to use there have been
  • 00:03:23 times when certain version of these
  • 00:03:25 services were not compatible with the OS
  • 00:03:27 and we've had to go back and look at
  • 00:03:29 different OS that was compatible with
  • 00:03:31 all of these different services secondly
  • 00:03:34 we had to check the compatibility
  • 00:03:36 between these services and the libraries
  • 00:03:38 and dependencies on the OS we've had
  • 00:03:41 issues where one service requires one
  • 00:03:44 version of a dependent library whereas
  • 00:03:46 another service requires another version
  • 00:03:48 the architecture of our application
  • 00:03:51 changed over time we've had to upgrade
  • 00:03:54 to newer versions of these components or
  • 00:03:55 change the database etc and every time
  • 00:03:58 something changed we had to go through
  • 00:04:01 the same process of checking
  • 00:04:03 compatibility between these various
  • 00:04:05 components and the underlying
  • 00:04:07 infrastructure this compatibility matrix
  • 00:04:10 issue is usually referred to as the
  • 00:04:12 matrix from hell next every time we had
  • 00:04:16 a new developer on board we found it
  • 00:04:19 really difficult to set up a new
  • 00:04:21 environment the new developers had to
  • 00:04:23 follow a large set of instructions
  • 00:04:25 and run hundreds of commands
  • 00:04:27 to finally set up their environment
  • 00:04:28 we had to make sure they were using the
  • 00:04:31 right operating system the right
  • 00:04:32 versions of each of these components and
  • 00:04:34 each developer had to set all that up by
  • 00:04:37 himself each time we also had different
  • 00:04:40 development tests and production
  • 00:04:42 environments one developer may be
  • 00:04:44 comfortable using one OS and the others
  • 00:04:46 may be comfortable using another one and
  • 00:04:49 so we couldn't guarantee that the
  • 00:04:51 application that we were building would
  • 00:04:53 run the same way in different
  • 00:04:55 environments and so all of this made our
  • 00:04:57 life in developing building and shipping
  • 00:05:01 the application really difficult so I
  • 00:05:04 needed something that could help us with
  • 00:05:06 the compatibility issue and something
  • 00:05:08 that will allow us to modify or change
  • 00:05:10 these components without affecting the
  • 00:05:13 other components and even modify the
  • 00:05:15 underlying operating systems as required
  • 00:05:17 and that search landed me on docker
  • 00:05:20 with docker I was able to run each
  • 00:05:23 component in a separate container with
  • 00:05:26 its own dependencies and its own
  • 00:05:28 libraries all on the same VM and the OS
  • 00:05:32 but within separate environments or
  • 00:05:34 containers we just had to build the
  • 00:05:37 docker configuration once and all our
  • 00:05:39 developers could now get started with a
  • 00:05:41 simple docker run command irrespective
  • 00:05:44 of the underlying operating system
  • 00:05:45 they ran all they needed to do was to
  • 00:05:48 make sure they had docker installed on
  • 00:05:50 their systems so what are containers
  • 00:05:52 containers are completely isolated
  • 00:05:55 environments as you can see they can have their
  • 00:05:57 own processes or services their own
  • 00:06:00 network interfaces
  • 00:06:01 their own mounts just like virtual
  • 00:06:03 machines except they all share the same
  • 00:06:05 OS kernel we will look at what that
  • 00:06:08 means in a bit but it's also important
  • 00:06:10 to note that containers are not new with
  • 00:06:12 docker containers have existed for about
  • 00:06:15 10 years now and some of the different
  • 00:06:16 types of containers are LXC LXD
  • 00:06:20 LXCFS etc docker utilizes LXC containers
  • 00:06:23 setting up these container environments
  • 00:06:25 is hard as they're very low-level and
  • 00:06:28 that is where docker offers a high level
  • 00:06:30 tool with several powerful
  • 00:06:32 functionalities making it really easy
  • 00:06:34 for end-users like us to understand how
  • 00:06:37 docker works let us revisit some base
  • 00:06:40 concepts of operating systems first if
  • 00:06:43 you look at operating systems like
  • 00:06:44 Ubuntu Fedora SUSE or CentOS they all
  • 00:06:47 consist of two things an OS kernel and a
  • 00:06:51 set of software the OS kernel is
  • 00:06:54 responsible for interacting with the
  • 00:06:55 underlying hardware while the OS kernel
  • 00:06:58 remains the same which is Linux in this
  • 00:07:00 case it's the software above it that
  • 00:07:02 makes these operating systems different
  • 00:07:04 this software may consist of a different
  • 00:07:07 user interface drivers compilers file
  • 00:07:09 managers developer tools etc so you have
  • 00:07:12 a common Linux kernel shared across all
  • 00:07:15 OSes and some custom software that
  • 00:07:18 differentiate operating systems from
  • 00:07:20 each other
  • 00:07:20 we said earlier that docker containers
  • 00:07:23 share the underlying kernel so what does
  • 00:07:26 that actually mean
  • 00:07:27 sharing the kernel let's say we have a
  • 00:07:29 system with an Ubuntu OS with docker
  • 00:07:32 installed on it docker can run any
  • 00:07:34 flavor of OS on top of it as long as
  • 00:07:37 they are all based on the same kernel in
  • 00:07:40 this case Linux if the underlying OS is
  • 00:07:43 Ubuntu docker can run a container based
  • 00:07:46 on another distribution like debian
  • 00:07:48 fedora SUSE or CentOS each docker
  • 00:07:51 container only has the additional
  • 00:07:54 software that we just talked about in
  • 00:07:56 the previous slide that makes these
  • 00:07:57 operating systems different and docker
  • 00:08:00 utilizes the underlying kernel of the
  • 00:08:02 docker host which works with all OSes
  • 00:08:05 above so what is an OS that does not share
  • 00:08:08 the same kernel as these Windows and so
  • 00:08:12 you won't be able to run a Windows based
  • 00:08:14 container on a docker host with Linux on
  • 00:08:16 it for that you will require a docker on
  • 00:08:19 a Windows server now it is when I say
  • 00:08:22 this that most of my students go hey
  • 00:08:25 hold on there that's not true and they
  • 00:08:29 install docker on windows run a
  • 00:08:29 container based on Linux and go see it's
  • 00:08:32 possible well when you install docker on
  • 00:08:35 Windows and run a Linux container on
  • 00:08:37 Windows you're not really running a
  • 00:08:40 Linux container on Windows Windows runs
  • 00:08:42 a Linux container on a Linux virtual
  • 00:08:45 machine under the hood so it's really
  • 00:08:47 Linux container on Linux virtual machine
  • 00:08:50 on Windows we discuss more about this
  • 00:08:53 the docker on Windows or Mac later
  • 00:08:56 during this course now you might ask
  • 00:08:59 isn't that a disadvantage then not being
  • 00:09:01 able to run another kernel on the OS the
  • 00:09:05 answer is no because unlike hypervisors
  • 00:09:07 docker is not meant to virtualize and
  • 00:09:10 run different operating systems and
  • 00:09:11 kernels on the same hardware the main
  • 00:09:14 purpose of docker is to package and
  • 00:09:17 containerize applications and to ship
  • 00:09:19 them and to run them anywhere anytime
  • 00:09:22 as many times as you want so that brings
  • 00:09:25 us to the differences between virtual
  • 00:09:27 machines and containers something that
  • 00:09:29 we tend to compare especially those from a
  • 00:09:31 virtualization background as you can see
  • 00:09:34 on the right in case of docker we have
  • 00:09:36 the underlying hardware infrastructure
  • 00:09:39 and then the OS and then docker
  • 00:09:41 installed on the OS docker then manages
  • 00:09:44 the containers that run with libraries
  • 00:09:46 and dependencies alone in case of
  • 00:09:48 virtual machines we have the hypervisor
  • 00:09:50 like ESX on the hardware and then the
  • 00:09:54 virtual machines on them as you can see
  • 00:09:56 each virtual machine has its own OS
  • 00:09:58 inside it then the dependencies and then
  • 00:10:01 the application the overhead causes
  • 00:10:04 higher utilization of underlying
  • 00:10:06 resources as there are multiple virtual
  • 00:10:08 operating systems and kernels running
  • 00:10:10 the virtual machines also consume higher
  • 00:10:13 disk space as each VM is heavy and is
  • 00:10:16 usually in gigabytes in size whereas
  • 00:10:18 docker containers are lightweight and
  • 00:10:20 are usually in megabytes in size this
  • 00:10:23 allows docker containers to boot up
  • 00:10:25 faster usually in a matter of seconds
  • 00:10:27 whereas VMs as we know take minutes to
  • 00:10:30 boot up as it needs to boot up the
  • 00:10:32 entire operating system it is also
  • 00:10:35 important to note that docker has less
  • 00:10:37 isolation as more resources are shared
  • 00:10:39 between the containers like kernel where
  • 00:10:42 as VMs have complete isolation from each
  • 00:10:44 other since VMs don't rely on the
  • 00:10:47 underlying OS or kernel you can run
  • 00:10:49 different types of applications built on
  • 00:10:51 different OSes such as Linux based or
  • 00:10:53 Windows based apps on the same hypervisor
  • 00:10:56 so those are some differences between
  • 00:10:58 the two now having said that it's not an
  • 00:11:01 either container or virtual machine
  • 00:11:03 situation it's containers
  • 00:11:06 and virtual machines now when you have
  • 00:11:08 large environments with thousands of
  • 00:11:10 application containers running on
  • 00:11:11 thousands of docker hosts you will often
  • 00:11:14 see containers provisioned on virtual
  • 00:11:16 docker hosts that way we can utilize the
  • 00:11:20 advantages of both technologies we can
  • 00:11:22 use the benefits of virtualization to
  • 00:11:24 easily provision or decommission docker
  • 00:11:26 hosts as required at the same
  • 00:11:29 time make use of the benefits of docker
  • 00:11:31 to easily provision applications and
  • 00:11:33 quickly scale them as required but
  • 00:11:36 remember that in this case we will not
  • 00:11:38 be provisioning that many virtual
  • 00:11:41 machines as we used to before because
  • 00:11:44 earlier we provisioned a virtual machine
  • 00:11:47 for each application
  • 00:11:48 now you might provision a virtual
  • 00:11:50 machine for hundreds or thousands of
  • 00:11:52 containers so how is it done there are
  • 00:11:55 lots of containerized versions of
  • 00:11:58 applications readily available as of
  • 00:12:00 today so most organizations have their
  • 00:12:02 products containerized and available in
  • 00:12:04 a public docker repository called
  • 00:12:07 docker hub or docker store for example
  • 00:12:10 you can find images of most common
  • 00:12:12 operating systems databases and other
  • 00:12:14 services and tools once you identify the
  • 00:12:17 images you need and you install docker
  • 00:12:19 on your hosts bringing up an application
  • 00:12:22 is as easy as running a docker run
  • 00:12:25 command with the name of the image in
  • 00:12:27 this case running a docker run ansible
  • 00:12:29 command will run an instance of ansible
  • 00:12:31 on the docker host similarly run an
  • 00:12:33 instance of MongoDB Redis and nodejs
  • 00:12:37 using the docker run command if we need
  • 00:12:39 to run multiple instances of the web
  • 00:12:41 service simply add as many instances as
  • 00:12:44 you need and configure a load balancer
  • 00:12:46 of some kind in the front in case one of
  • 00:12:49 the instances were to fail simply
  • 00:12:52 destroy that instance and launch a new one
  • 00:12:54 there are other solutions available for
  • 00:12:57 handling such cases that we will look at
  • 00:12:59 later during this course and for now
  • 00:13:02 don't focus too much on the commands and
  • 00:13:04 we will get to that in a bit we've been
  • 00:13:07 talking about images and containers
  • 00:13:10 let's understand the difference between
  • 00:13:12 the two an image is a package or a
  • 00:13:15 template just like a VM template that
  • 00:13:17 you might have worked with in the
  • 00:13:19 virtualization world
  • 00:13:19 it is used to create one or more
  • 00:13:21 containers containers are running
  • 00:13:24 instances of images that are isolated
  • 00:13:27 and have their own environments and set
  • 00:13:29 of processes as we have seen before a
  • 00:13:32 lot of products have been dockerized
  • 00:13:34 already in case you cannot find what
  • 00:13:36 you're looking for you could create your
  • 00:13:38 own image and push it to docker hub
  • 00:13:40 repository making it available to the
  • 00:13:42 public so if you look at it
  • 00:13:45 traditionally developers developed
  • 00:13:48 applications then they hand it over to
  • 00:13:50 ops team to deploy and manage it in
  • 00:13:52 production environments they do that by
  • 00:13:55 providing a set of instructions such as
  • 00:13:57 information about how the hosts must be
  • 00:13:59 set up what prerequisites are to be
  • 00:14:01 installed on the host and how the
  • 00:14:03 dependencies are to be configured etc
  • 00:14:05 since the ops team did not really
  • 00:14:08 develop the application on their own
  • 00:14:09 they struggled with setting it up when
  • 00:14:12 they hit an issue they worked with the
  • 00:14:13 developers to resolve it with docker the
  • 00:14:17 developers and operations teams work
  • 00:14:19 hand-in-hand to transform the guide into
  • 00:14:22 a docker file with both of their
  • 00:14:25 requirements this docker file is then
  • 00:14:27 used to create an image for their
  • 00:14:29 applications this image can now run on
  • 00:14:32 any host with docker installed on it and
  • 00:14:34 is guaranteed to run the same way
  • 00:14:36 everywhere so the ops team can now
  • 00:14:39 simply use the image to deploy the
  • 00:14:41 application since the image was already
  • 00:14:43 working when the developer built it and
  • 00:14:45 operations have not modified it it
  • 00:14:48 continues to work the same way when
  • 00:14:50 deployed in production and that's one
  • 00:14:53 example of how a tool like docker
  • 00:14:55 contributes to the DevOps culture well
  • 00:14:59 that's it for now and in the upcoming
  • 00:15:01 lecture we will look at how to get
  • 00:15:03 started with docker
  • 00:15:04 [Music]
  • 00:15:13 we will now see how to get started with
  • 00:15:16 docker now docker has two editions the
  • 00:15:19 Community Edition and the Enterprise
  • 00:15:21 Edition the Community Edition is the set
  • 00:15:24 of free docker products the Enterprise
  • 00:15:27 Edition is the certified and supported
  • 00:15:29 container platform that comes with
  • 00:15:32 enterprise add-ons like the image
  • 00:15:34 management image security Universal
  • 00:15:37 control plane for managing and
  • 00:15:38 orchestrating container runtimes but of
  • 00:15:41 course these come with a price we will
  • 00:15:43 discuss more about container
  • 00:15:45 orchestration later in this course
  • 00:15:47 and along with some alternatives for now
  • 00:15:50 we will go ahead with the community
  • 00:15:52 edition the community edition is
  • 00:15:55 available on Linux Mac Windows or on
  • 00:15:58 cloud platforms like AWS or Azure in the
  • 00:16:02 upcoming demo we will take a look at how
  • 00:16:05 to install and get started with docker
  • 00:16:07 on a Linux system now if you are on Mac
  • 00:16:11 or Windows you have two options either
  • 00:16:13 install a Linux VM using VirtualBox or
  • 00:16:17 some kind of virtualization platform and
  • 00:16:19 then follow along with the upcoming demo
  • 00:16:21 which is really the easiest way to
  • 00:16:24 get started with docker the second
  • 00:16:26 option is to install docker Desktop for
  • 00:16:29 Mac or the docker desktop for Windows
  • 00:16:31 which are native applications so if that
  • 00:16:34 is really what you want
  • 00:16:36 check out the docker for Mac and the
  • 00:16:38 windows sections towards the end of this
  • 00:16:41 course and then head back here once you
  • 00:16:43 are all set up we will now head over to
  • 00:16:46 a demo and we will take a look at how to
  • 00:16:49 install docker on a Linux machine
  • 00:16:54 in this demo we look at how to install
  • 00:16:59 and get started with docker first of all
  • 00:17:02 identify a system physical or virtual
  • 00:17:05 machine or laptop that has a supported
  • 00:17:07 operating system in my case I have an
  • 00:17:10 Ubuntu VM go to docker.com and click on
  • 00:17:16 get docker
  • 00:17:18 you will be taken to the docker engine
  • 00:17:21 Community Edition page that is the free
  • 00:17:24 version that we are after from the
  • 00:17:27 left-hand menu select your system type I
  • 00:17:31 choose Linux in my case and then select
  • 00:17:35 your OS flavor
  • 00:17:36 I choose Ubuntu read through the
  • 00:17:39 prerequisites and requirements your
  • 00:17:42 Ubuntu system must be 64-bit and one of
  • 00:17:45 these supported versions like Disco
  • 00:17:48 Cosmic Bionic or Xenial in my case I
  • 00:17:52 have a Bionic version to confirm view
  • 00:17:55 the /etc/*release* file next uninstall any
  • 00:18:02 older version if one exists so let's
  • 00:18:06 just make sure that there is none on my
  • 00:18:08 host so I'll just copy and paste that
  • 00:18:11 command and I confirm that there are no
  • 00:18:14 older version that exists on my system
  • 00:18:17 the next step is to set up repository
  • 00:18:21 and install the software now there are
  • 00:18:23 two ways to go about this the first is
  • 00:18:26 using the package manager by first
  • 00:18:29 updating the repository using the
  • 00:18:31 apt-get update command then installing
  • 00:18:34 the prerequisite packages and then
  • 00:18:36 adding Docker's official GPG keys and
  • 00:18:39 then installing docker but I'm not going
  • 00:18:42 to go that route there is an easier way
  • 00:18:44 if you scroll all the way to the bottom
  • 00:18:47 you will find the instructions to
  • 00:18:49 install docker using the convenience
  • 00:18:51 script it's a script that automates the
  • 00:18:55 entire installation process and works on
  • 00:18:57 most operating systems run the first
  • 00:19:01 command to download a copy of the script
  • 00:19:03 and then run the second command to
  • 00:19:05 execute the script
  • 00:19:07 to install docker automatically give it
  • 00:19:10 a few minutes to complete the
  • 00:19:11 installation
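
For reference, the two convenience-script commands look like this (the script URL is the one published at get.docker.com; a sketch, not a substitute for reading the page):

    curl -fsSL https://get.docker.com -o get-docker.sh   # download the convenience script
    sudo sh get-docker.sh                                # execute it to install docker
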
  • 00:19:19 the installation is now
  • 00:19:21 successful let us now check the
  • 00:19:23 version of docker using the docker
  • 00:19:25 version command we've installed version
  • 00:19:28 19.03.1 we will now run a simple
  • 00:19:33 container to ensure everything is
  • 00:19:35 working as expected for this head over
  • 00:19:38 to docker hub at hub.docker.com here
  • 00:19:42 you will find a list of the most popular
  • 00:19:44 docker images like nginx MongoDB Alpine
  • 00:19:48 node.js
  • 00:19:50 Redis etc let's search for a fun image
  • 00:19:53 called whalesay whalesay is Docker's
  • 00:19:57 version of cowsay which is basically a
  • 00:19:59 simple application that prints a cow
  • 00:20:02 saying something in this case it happens
  • 00:20:06 to be a whale copy the docker run
  • 00:20:08 command given here remember to add a
  • 00:20:11 sudo and we will change the message to
  • 00:20:15 hello world
  • 00:20:31 on running this command docker pulls the
  • 00:20:34 image of the whalesay application from
  • 00:20:36 docker hub and runs it and we have our
  • 00:20:40 whale saying hello great we're all set
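
The demo command, roughly as shown on Docker Hub (sudo is only needed if your user is not in the docker group; the message text is whatever you substitute):

    sudo docker run docker/whalesay cowsay Hello-World
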
  • 00:20:46 remember for the purpose of this course
  • 00:20:48 you don't really need to set up a docker
  • 00:20:50 system on your own
  • 00:20:51 we provide hands-on labs that you will
  • 00:20:54 get access to but if you wish to
  • 00:20:56 experiment on your own and follow along
  • 00:20:58 feel free to do so we now look at some
  • 00:21:05 of the docker commands at the end of
  • 00:21:07 this lecture you will go through a
  • 00:21:08 hands-on quiz where you will practice
  • 00:21:11 working with these commands let's start
  • 00:21:13 by looking at docker run command the
  • 00:21:16 docker run command is used to run a
  • 00:21:18 container from an image running the
  • 00:21:20 docker run nginx command will run an
  • 00:21:23 instance of the nginx application from
  • 00:21:25 the docker host if it already exists if
  • 00:21:28 the image is not present on the host it
  • 00:21:31 will go out to docker hub and pull the
  • 00:21:33 image down but this is only done the
  • 00:21:36 first time for the subsequent executions
  • 00:21:38 the same image will be reused the docker
  • 00:21:42 PS command lists all running containers
  • 00:21:45 and some basic information about them
  • 00:21:47 such as the container ID the name of the
  • 00:21:50 image we used to run the containers the
  • 00:21:52 current status and the name of the
  • 00:21:53 container each container automatically
  • 00:21:57 gets a random ID and name created for it
  • 00:22:00 by docker which in this case is
  • 00:22:03 silly_sammet to see all containers running or
  • 00:22:06 not use the -a option this outputs all
  • 00:22:10 running as well as previously stopped or
  • 00:22:13 exited containers we'll talk about the
  • 00:22:15 command and port fields shown in this
  • 00:22:17 output later in this course for now
  • 00:22:20 let's just focus on the basic commands
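
A quick sketch of the commands covered so far:

    docker run nginx    # run a container from the nginx image (pulled on first use)
    docker ps           # list running containers
    docker ps -a        # list all containers, including stopped or exited ones
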
  • 00:22:22 to stop a running container use the
  • 00:22:25 docker stop command but you must provide
  • 00:22:27 either the container ID or the container
  • 00:22:29 name in the stop command if you're not
  • 00:22:32 sure of the name run the docker PS
  • 00:22:34 command to get it on success you will
  • 00:22:37 see the name printed out and running
  • 00:22:38 docker PS again will show no running
  • 00:22:41 containers running docker ps -a
  • 00:22:45 however shows the container silly_sammet
  • 00:22:47 and that it is in an exited state a
  • 00:22:50 few seconds ago now what if we don't
  • 00:22:54 want this container lying around
  • 00:22:56 consuming space what if we want to get
  • 00:22:58 rid of it for good use the docker RM
  • 00:23:01 command to remove a stopped or exited
  • 00:23:04 container permanently if it prints the
  • 00:23:07 name back we're good run the docker PS
  • 00:23:09 command again to verify that it's no
  • 00:23:12 longer present good but what about the
  • 00:23:15 nginx image that was downloaded at first
  • 00:23:17 we're not using that anymore so how do
  • 00:23:20 we get rid of that image but first how
  • 00:23:23 do we see a list of images present on
  • 00:23:26 our hosts run the docker images command
  • 00:23:29 to see a list of available images and
  • 00:23:31 their sizes on our hosts we have four
  • 00:23:34 images the nginx Redis Ubuntu and Alpine
  • 00:23:37 we will talk about tags later in this
  • 00:23:40 course when we discuss about images to
  • 00:23:43 remove an image that you no longer plan
  • 00:23:46 to use run the docker rmi command
  • 00:23:49 remember you must ensure that no
  • 00:23:52 containers are running off of that image
  • 00:23:54 before attempting to remove the image
  • 00:23:56 you must stop and delete all dependent
  • 00:23:58 containers to be able to delete an image
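
In command form, roughly (the container and image names are whatever docker ps and docker images report on your own host):

    docker stop silly_sammet    # stop a running container by name or ID
    docker rm silly_sammet      # permanently remove the stopped container
    docker images               # list images and their sizes
    docker rmi nginx            # remove an image (stop and delete dependent containers first)
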
  • 00:24:01 when we ran the docker run command
  • 00:24:04 earlier it downloaded the Ubuntu image
  • 00:24:07 as it couldn't find one locally what if
  • 00:24:10 we simply want to download the image and
  • 00:24:12 keep so when we run the docker run
  • 00:24:16 command we don't want to wait for it to
  • 00:24:18 download use the docker pull command to
  • 00:24:21 only pull the image and not run the
  • 00:24:24 container so in this case the docker
  • 00:24:26 pull ubuntu command pulls the Ubuntu
  • 00:24:28 image and stores it on our host let's
  • 00:24:32 look at another example say you were to
  • 00:24:34 run a docker container from an Ubuntu
  • 00:24:36 image when you run the docker run Ubuntu
  • 00:24:39 command it runs an instance of Ubuntu
  • 00:24:42 image and exits immediately if you were
  • 00:24:45 to list the running containers you
  • 00:24:47 wouldn't see the container running if
  • 00:24:48 you list all containers including those
  • 00:24:50 that are stopped you will see that the
  • 00:24:52 new container you ran is in an exited state
  • 00:24:55 now why is that
  • 00:24:58 unlike virtual machines containers are
  • 00:25:01 not meant to host an operating system
  • 00:25:03 containers are meant to run a specific
  • 00:25:07 task or process such as to host an
  • 00:25:09 instance of a web server or application
  • 00:25:11 server or a database or simply to carry
  • 00:25:14 some kind of computation or analysis
  • 00:25:17 tasks once the task is complete the
  • 00:25:20 container exits a container only lives
  • 00:25:24 as long as the process inside it is
  • 00:25:26 alive if the web service inside the
  • 00:25:29 container is stopped or crashes then the
  • 00:25:32 container exits this is why when you run
  • 00:25:35 a container from an Ubuntu image it
  • 00:25:37 stops immediately because Ubuntu is
  • 00:25:40 just an image of an operating system
  • 00:25:42 that is used as the base image for other
  • 00:25:44 applications there is no process or
  • 00:25:47 application running in it by default if
  • 00:25:50 the image isn't running any service and
  • 00:25:53 as is the case with Ubuntu you could
  • 00:25:55 instruct docker to run a process with
  • 00:25:58 the docker run command for example a
  • 00:26:01 sleep command with a duration of 5
  • 00:26:03 seconds when the container starts it
  • 00:26:06 runs the sleep command and goes into
  • 00:26:08 sleep for 5 seconds post which the sleep
  • 00:26:12 command exits and the container stops
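
As a sketch:

    docker run ubuntu sleep 5    # the container runs the sleep process, then exits with it
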
  • 00:26:14 what we just saw was executing a command
  • 00:26:17 when we run the container but what if we
  • 00:26:20 would like to execute a command on a
  • 00:26:21 running container for example when I run
  • 00:26:24 the docker PS command I can see that
  • 00:26:26 there is a running container which uses
  • 00:26:29 the Ubuntu image and sleeps for 100 seconds
  • 00:26:32 let's say I would like to see the
  • 00:26:35 contents of a file inside this
  • 00:26:37 particular container I could use the
  • 00:26:39 docker exec command to execute a command
  • 00:26:42 on my docker container in this case to
  • 00:26:45 print the contents of the /etc/hosts
  • 00:26:48 file
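
A sketch, with a hypothetical container name standing in for whatever docker ps reports on your host:

    docker exec distracted_mcclintock cat /etc/hosts    # run a command inside a running container
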
  • 00:26:52 finally let's look at one more option before we head over to the
  • 00:26:53 practice exercises I'm now going to run
  • 00:26:56 a docker image I developed for a simple
  • 00:26:59 web application the repository name is
  • 00:27:02 kodekloud/simple-webapp it runs
  • 00:27:05 a simple web server that listens on port
  • 00:27:07 8080 when you run a docker run
  • 00:27:11 like this it runs in the foreground or
  • 00:27:14 in an attached mode meaning you will be
  • 00:27:17 attached to the console or the standard
  • 00:27:20 out of the docker container and you will
  • 00:27:22 see the output of the web service on
  • 00:27:24 your screen you won't be able to do
  • 00:27:27 anything else on this console other than
  • 00:27:29 view the output until this docker
  • 00:27:30 container stops it won't respond to your
  • 00:27:33 inputs press the ctrl + C combination to
  • 00:27:38 stop the container and the application
  • 00:27:40 hosted on the container exits and you
  • 00:27:43 get back to your prompt another option
  • 00:27:47 is to run the docker container in the
  • 00:27:49 detached mode by providing the -d option
  • 00:27:53 this will run the docker container in
  • 00:27:56 the background mode and you will be back
  • 00:27:58 to your prompt immediately the container
  • 00:28:01 will continue to run in the background run
  • 00:28:04 the docker PS command
  • 00:28:05 to view the running container now if you
  • 00:28:08 would like to attach back to the running
  • 00:28:10 container later run the docker attach
  • 00:28:14 command and specify the name or ID of
  • 00:28:16 the docker container now remember if you
  • 00:28:19 are specifying the ID of a container in
  • 00:28:22 any docker command you can simply
  • 00:28:24 provide the first few characters alone
  • 00:28:27 just so it is different from the other
  • 00:28:29 container IDs on the host in this case I
  • 00:28:32 specified a043d
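
A sketch of the attached versus detached flow (the image name follows the lecture's example, and a043d is the ID prefix from the demo):

    docker run -d kodekloud/simple-webapp    # detached mode: prints the container ID and returns
    docker ps                                # confirm the container is still running
    docker attach a043d                      # re-attach using a unique prefix of the ID
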
  • 00:28:37 now don't worry about accessing the UI of the webserver
  • 00:28:40 for now we will look more into that in
  • 00:28:42 the upcoming lectures for now let's just
  • 00:28:45 understand the basic commands we will now
  • 00:28:47 get our hands dirty with the docker CLI
  • 00:28:50 so let's take a look at how to access
  • 00:28:52 the practice lab environments next
  • 00:28:54 [Music]
  • 00:28:59 let me now walk you through the hands-on
  • 00:29:02 lab practice environment the links to
  • 00:29:06 access the labs associated with this
  • 00:29:08 course are available at
  • 00:29:11 kodekloud.com/p/docker-labs
  • 00:29:14 this link is also given in the
  • 00:29:17 description of this video
  • 00:29:19 once you're on this page use the links
  • 00:29:21 given there to access the labs
  • 00:29:23 associated to your lecture each lecture
  • 00:29:27 has its own lab so remember to choose
  • 00:29:30 the right lab for your lecture
  • 00:29:32 the labs open up right in your browser I
  • 00:29:35 would recommend to use google chrome
  • 00:29:38 while working with the labs the
  • 00:29:40 interface consists of two parts a
  • 00:29:42 terminal on the left and a quiz portal
  • 00:29:45 on the right the cooze portal on the
  • 00:29:47 right gives you challenges to solve
  • 00:29:49 follow the quiz and try and answer the
  • 00:29:51 questions asked and complete the tasks
  • 00:29:53 given to you each scenario consists of
  • 00:29:56 anywhere from 10 to 20 questions that
  • 00:29:59 need to be answered within 30 minutes
  • 00:30:01 to an hour at the top you have the
  • 00:30:03 question numbers below that is the
  • 00:30:05 remaining time for your lab below that
  • 00:30:08 is the question
  • 00:30:09 if you're not able to solve the
  • 00:30:10 challenge look for hints in the hints
  • 00:30:13 section you may skip a question by
  • 00:30:15 hitting the skip button in the top right
  • 00:30:17 corner but remember that you will not be
  • 00:30:19 able to go back to a previous question
  • 00:30:22 once you have skipped if the quiz portal
  • 00:30:25 gets stuck for some reason click on the
  • 00:30:27 quiz portal tab at the top to open the
  • 00:30:31 quiz portal in a separate window the
  • 00:30:37 terminal
  • 00:30:38 gives you access to a real system
  • 00:30:40 running docker you can run any docker
  • 00:30:43 command here and run your own containers
  • 00:30:45 or applications you would typically be
  • 00:30:47 running commands to solve the tasks
  • 00:30:49 assigned in the quiz portal you may play
  • 00:30:52 around and experiment with this
  • 00:30:53 environment but make sure you do that
  • 00:30:55 after you've gone through the quiz so
  • 00:30:57 that your work does not interfere with
  • 00:30:59 the tasks provided by the quiz so let me
  • 00:31:03 walk you through a few questions there
  • 00:31:06 are two types of questions each lab
  • 00:31:08 scenario starts with a set of
  • 00:31:10 exploratory multiple-choice questions
  • 00:31:12 where you're asked to explore and find
  • 00:31:15 information in the given environment and
  • 00:31:17 select the right answer this is to get
  • 00:31:20 you familiarized with a set up you are
  • 00:31:22 then asked to perform tasks like run a
  • 00:31:24 container stop them delete them build
  • 00:31:27 your own image etc here the first
  • 00:31:31 question asks us to find the version of
  • 00:31:33 docker server engine running on the host
  • 00:31:35 run the docker reversion command in the
  • 00:31:38 terminal and identify the right version
  • 00:31:40 then select the appropriate option from
  • 00:31:43 the given choices another example is the
  • 00:31:47 fourth question where it asks you to run
  • 00:31:50 a container using the Redis image if
  • 00:31:54 you're not sure of the command click on
  • 00:31:56 hints and it will show you a hint we now
  • 00:32:01 run a Redis container using the docker
  • 00:32:03 run redis command wait for the
  • 00:32:03 container to run once done click on
  • 00:32:06 check to check your work we have now
  • 00:32:09 successfully completed the task
  • 00:32:11 similarly follow along and complete all
  • 00:32:13 tasks once the lab exercise is completed
  • 00:32:16 remember to leave a feedback and let us
  • 00:32:18 know how it went a few things to note
  • 00:32:21 these are publicly accessible labs that
  • 00:32:24 anyone can access so if you catch
  • 00:32:26 yourself logged out during a peak hour
  • 00:32:29 please wait for some time and try again
  • 00:32:32 also remember to not store any private
  • 00:32:36 or confidential data on these systems
  • 00:32:38 remember that this environment is for
  • 00:32:41 learning purposes only and is only alive
  • 00:32:44 for an hour after which the lab is
  • 00:32:46 destroyed and so does all your work but you
  • 00:32:50 may start over
  • 00:32:50 and access these labs as many times as
  • 00:32:53 you want until you feel confident I will
  • 00:32:56 also post solutions to these lab quizzes
  • 00:32:58 so if you run into issues you may refer
  • 00:33:01 to those that's it for now head over to
  • 00:33:04 the first challenge and I will see you
  • 00:33:06 on the other side we will now look at
  • 00:33:17 some of the other docker run commands at
  • 00:33:20 the end of this lecture you will go
  • 00:33:21 through a hands-on quiz where you will
  • 00:33:23 practice working with these commands we
  • 00:33:25 learned that we could use the docker run
  • 00:33:29 Redis command to run the container
  • 00:33:30 running a Redis service in this case the
  • 00:33:33 latest version of Redis which happens to
  • 00:33:35 be 5.0.5 as of today but what if we
  • 00:33:39 want to run another version of Redis
  • 00:33:41 like for example and older versions say
  • 00:33:44 4.0 then you specify the version
  • 00:33:47 separated by a colon this is called a
  • 00:33:51 tag in that case docker pulls an image
  • 00:33:55 of the 4.0 version of Redis and runs
  • 00:33:58 that also notice that if you don't
  • 00:34:02 specify any tag as in the first command
  • 00:34:05 docker will consider the default tag to
  • 00:34:07 be latest latest is a tag associated to
  • 00:34:12 the latest version of that software
  • 00:34:13 which is governed by the authors of that
  • 00:34:16 software so as a user how do you find
  • 00:34:20 information about these versions and
  • 00:34:22 what is the latest at hub.docker.com
  • 00:34:26 look up an image and you will find all
  • 00:34:29 the supported tags in its description
  • 00:34:32 each version of the software can have
  • 00:34:34 multiple short and long tags associated
  • 00:34:37 with it as seen here in this case the
  • 00:34:42 version 5.0.5 also has the latest
  • 00:34:45 tag on it
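
In command form:

    docker run redis        # no tag given, so redis:latest is assumed (5.0.5 here)
    docker run redis:4.0    # an explicit tag pulls and runs that specific version
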
  • 00:34:47 let's now look at inputs I have a simple
  • 00:34:51 prompt application that when run asks
  • 00:34:54 for my name and on entering my name
  • 00:34:57 prints a welcome message if I were to
  • 00:35:00 dockerize this application and run it as a docker
  • 00:35:03 container like this it wouldn't wait for
  • 00:35:06 the prompt it just prints whatever the
  • 00:35:08 application is supposed to print on
  • 00:35:10 standard out that is because by default
  • 00:35:14 the docker container does not listen to
  • 00:35:17 a standard input even though you are
  • 00:35:19 attached to its console it is not able
  • 00:35:22 to read any input from you it doesn't
  • 00:35:25 have a terminal to read inputs from it
  • 00:35:27 runs in a non-interactive mode if you'd
  • 00:35:31 like to provide your input
  • 00:35:33 you must map the standard input of your
  • 00:35:36 host to the docker container using the
  • 00:35:39 -i parameter the -i parameter is for
  • 00:35:41 interactive mode and when I input my
  • 00:35:44 name it prints the expected output but
  • 00:35:48 there is something still missing from
  • 00:35:49 this the prompt when we run the app at
  • 00:35:55 first it asked us for our name but when
  • 00:35:58 dockerized that prompt is missing even
  • 00:36:01 though it seems to have accepted my
  • 00:36:03 input that is because the application
  • 00:36:06 prompts on the terminal and we have not
  • 00:36:09 attached to the container's terminal for
  • 00:36:13 this use the -t option as well the -t
  • 00:36:16 stands for a pseudo-terminal so with the
  • 00:36:21 combination of -it we are now attached
  • 00:36:24 to the terminal as well as in an
  • 00:36:26 interactive mode on the container
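
A sketch, with a hypothetical image name prompt-app standing in for the lecture's prompt application:

    docker run -i prompt-app     # stdin is mapped, so input works, but the prompt is not shown
    docker run -it prompt-app    # stdin plus a pseudo-terminal: the prompt appears as expected
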
  • 00:36:29 we will now look at port mapping or port
  • 00:36:32 publishing on containers let's go back
  • 00:36:35 to the example where we run a simple web
  • 00:36:37 application in a docker container on my
  • 00:36:39 docker host remember the underlying
  • 00:36:42 host where docker is installed is called
  • 00:36:44 docker host or docker engine when we run
  • 00:36:47 a containerized web application it runs
  • 00:36:50 and we are able to see that the server
  • 00:36:52 is running but how does a user access my
  • 00:36:55 application as you can see my
  • 00:36:58 application is listening on port 5000 so
  • 00:37:02 I could access my application by using
  • 00:37:03 port 5000 but what IP do I use to access
  • 00:37:08 it from a web browser there are two
  • 00:37:11 options available one is to use the IP
  • 00:37:14 the docker container every docker
  • 00:37:16 container gets an IP assigned by default
  • 00:37:18 in this case it is 172.17.0.2 but
  • 00:37:23 remember that this is an internal IP and
  • 00:37:26 is only accessible within the docker
  • 00:37:28 host so if you open a browser from
  • 00:37:31 within the docker host you can go to
  • 00:37:33 http://172.17.0.2:5000
  • 00:37:37 to access the application
  • 00:37:42 but since this is
  • 00:37:46 an internal IP users outside of the
  • 00:37:49 docker host cannot access it using this
  • 00:37:51 IP for this we could use the IP of the
  • 00:37:55 docker host which is
  • 00:37:57 192.168.1.5 but for that to
  • 00:38:00 work you must have mapped the port
  • 00:38:03 inside the docker container to a free
  • 00:38:06 port on the docker host for example if I
  • 00:38:09 want the users to access my application
  • 00:38:11 through port 80 on my docker host I
  • 00:38:13 could map port 80 of localhost to port
  • 00:38:17 5000 on the docker container using the
  • 00:38:20 dash P parameter in my run command like
  • 00:38:23 this and so the user can access my
  • 00:38:28 application by going to the URL HTTP
  • 00:38:31 colon slash slash one ninety two dot one
  • 00:38:34 sixty eight dot one dot five colon 80
  • 00:38:37 and all traffic on port 80 on my daugher
  • 00:38:41 host will get routed to port 5000 inside
  • 00:38:44 the docker container this way you can
  • 00:38:48 run multiple instances of your
  • 00:38:50 application and map them to different
  • 00:38:52 ports on the docker host or run
  • 00:38:55 instances of different applications on
  • 00:38:57 different ports for example in this case
  • 00:38:59 and running an instance of MySQL that
  • 00:39:02 runs a database on my host and listens
  • 00:39:05 on the default MySQL port which happens
  • 00:39:08 to be three 3:06 or another instance of
  • 00:39:11 MySQL on another port eight 3:06 so you
  • 00:39:15 can run as many applications like this
  • 00:39:17 and map them to as many ports as you
  • 00:39:20 want and of course you cannot map to the
  • 00:39:23 same port on the docker host more than
  • 00:39:25 once we will
  • 00:39:28 because more about port mapping and
  • 00:39:29 networking of containers in the
  • 00:39:31 networked lecture later on let's now
  • 00:39:34 look at how data is persisted in a
  • 00:39:37 docker container for example let's say
  • 00:39:40 you were to run a MySQL container when
  • 00:39:42 databases and tables are created the
  • 00:39:45 data files are stored in location /wor
  • 00:39:48 Labe MySQL inside the docker container
  • 00:39:51 remember the docker container has its
  • 00:39:54 own isolated filesystem and any changes
  • 00:39:57 to any files happen within the container
  • 00:40:00 let's assume you dump a lot of data into
  • 00:40:03 the database what happens if you were to
  • 00:40:06 delete the MySQL container and remove it
  • 00:40:09 as soon as you do that the container
  • 00:40:12 along with all the data inside it gets
  • 00:40:15 blown away meaning all your data is gone
  • 00:40:18 if you would like to persist data you
  • 00:40:21 would want to map a directory outside
  • 00:40:24 the container on the docker host to a
  • 00:40:26 directory inside the container in this
  • 00:40:29 case I create a directory called /opt
  • 00:40:32 slash data dir and map that to var Lib
  • 00:40:37 MySQL inside the docker container using
  • 00:40:40 the – V option and specifying the
  • 00:40:43 directory on the door host followed by a
  • 00:40:45 colon and the directory inside the
  • 00:40:48 crocker container
  • 00:40:49 this way when docker container runs it
  • 00:40:52 will implicitly mount the external
  • 00:40:55 directory to a folder inside the docker
  • 00:40:57 container this way all your data will
  • 00:41:00 now be stored in the external volume at
  • 00:41:03 /opt slash data directory and thus will
  • 00:41:07 remain even if you delete the docker
  • 00:41:09 container the docker PS command is good
  • 00:41:13 enough to get basic details about
  • 00:41:15 containers like their names and ID's but
  • 00:41:18 if you would like to see additional
  • 00:41:20 details about a specific container use
  • 00:41:23 the docker inspect command and provide
  • 00:41:25 the container name or ID it returns all
  • 00:41:28 details of a container in a JSON format
  • 00:41:31 such as the state Mounds configuration
  • 00:41:34 data network settings etc remember to
  • 00:41:37 use it when you are required to find
  • 00:41:39 details on a container
  • 00:41:41 finally how do we see the logs of a
  • 00:41:44 container we're on in the background for
  • 00:41:46 example I ran my simple web application
  • 00:41:48 using the – D parameter and it ran the
  • 00:41:51 container in a detached mode how do I
  • 00:41:54 view the logs which happens to be the
  • 00:41:56 contents written to the standard out of
  • 00:41:58 that container use the docker logs
  • 00:42:01 command and specify the container ID or
  • 00:42:04 name like this well that's it for this
  • 00:42:08 lecture head over to the challenges and
  • 00:42:10 practice working with docker commands so
  • 00:42:23 to start with a simple web application
  • 00:42:24 written in Python this piece of code is
  • 00:42:28 used to create a web application that
  • 00:42:30 displays a web page with a background
  • 00:42:32 color if you look closely into the
  • 00:42:34 application code you will see a line
  • 00:42:37 that sets the background color to red
  • 00:42:39 now that works just fine however if you
  • 00:42:43 decide to change the color in the future
  • 00:42:45 you will have to change the application
  • 00:42:47 code it is a best practice to move such
  • 00:42:50 information out of the application code
  • 00:42:52 and into say an environment variable
  • 00:42:55 called app color the next time you run
  • 00:42:58 the application set an environment
  • 00:43:00 variable called app color to a desired
  • 00:43:03 value and the application now has a new
  • 00:43:06 color once your application gets
  • 00:43:08 packaged into a docker image you will
  • 00:43:11 then run it with the docker run command
  • 00:43:13 followed by the name of the image
  • 00:43:15 however if you wish to pass the
  • 00:43:18 environment variable as we did before you
  • 00:43:20 would now use the docker run command's
  • 00:43:23 -e option to set an environment variable
  • 00:43:26 within the container to deploy multiple
  • 00:43:29 containers with different colors you
  • 00:43:31 would run the docker command multiple
  • 00:43:34 times and set a different value for the
  • 00:43:36 environment variable each time so how do
  • 00:43:40 you find the environment variable set on
  • 00:43:42 a container that's already running use
  • 00:43:46 the docker inspect command to inspect
  • 00:43:48 the properties of a running container
  • 00:43:50 under the config section you will find
  • 00:43:52 the list of environment variables
  • 00:43:55 on the container
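
A sketch, using the lecture's APP_COLOR variable and a hypothetical image name simple-webapp-color:

    docker run -e APP_COLOR=blue simple-webapp-color    # set an environment variable in the container
    docker inspect blissful_hopper                      # Config.Env in the output lists the variables set
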
  • 00:43:58 well that's it for this lecture on configuring environment
  • 00:44:00 variables in docker
  • 00:44:02 [Music]
  • 00:44:10 hello and welcome to this lecture on
  • 00:44:13 docker images in this lecture we're
  • 00:44:16 going to see how to create your own
  • 00:44:19 image now before that why would you need
  • 00:44:23 to create your own image it could either
  • 00:44:25 be because you cannot find a component
  • 00:44:28 or a service that you want to use as
  • 00:44:30 part of your application on docker hub
  • 00:44:32 already or you and your team decided
  • 00:44:35 that the application you're developing
  • 00:44:37 will be dockerized for ease of shipping and
  • 00:44:40 deployment in this case I'm going to
  • 00:44:43 containerize an application a simple web
  • 00:44:47 application that I have built using the
  • 00:44:50 Python flask framework first we need to
  • 00:44:54 understand what we are containerizing
  • 00:44:56 or what application we are creating an
  • 00:44:58 image for and how the application is
  • 00:45:01 built so start by thinking what you
  • 00:45:03 might do if you want to deploy the
  • 00:45:05 application manually we write down the
  • 00:45:08 steps required in the right order I'm
  • 00:45:10 creating an image for a simple web
  • 00:45:13 application if I were to set it up
  • 00:45:15 manually I would start with an operating
  • 00:45:18 system like Ubuntu then update the
  • 00:45:21 source repositories using the apt
  • 00:45:23 command then install dependencies using
  • 00:45:26 the apt command then install Python
  • 00:45:28 dependencies using the PIP command then
  • 00:45:31 copy over the source code of my
  • 00:45:33 application to a location like opt and
  • 00:45:36 then finally run the web server using
  • 00:45:39 the flask command now that I have the
  • 00:45:42 instructions create a docker file using
  • 00:45:45 this here's a quick overview of the
  • 00:45:47 process of creating your own image first
  • 00:45:50 create a docker file named docker file
  • 00:45:53 and write down the instructions for
  • 00:45:55 setting up your application in it such
  • 00:45:58 as installing dependencies where to copy
  • 00:46:01 the source code from and to and what the
  • 00:46:04 entry point of the application is etc
  • 00:46:06 once done build your image
  • 00:46:09 using the docker build command and
  • 00:46:11 specify the docker file as input as well
  • 00:46:14 as a tag name for the image this will
  • 00:46:17 create an image locally on your system
  • 00:46:20 to make it available on the public
  • 00:46:22 docker hub registry run the docker push
  • 00:46:26 command and specify the name of the
  • 00:46:29 image you just created in this case the
  • 00:46:33 name of the image is my account name
  • 00:46:36 which is mmumshad followed by the image
  • 00:46:39 name which is my-custom-app
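
In command form, roughly (the account and image names follow the lecture's example):

    docker build . -f Dockerfile -t mmumshad/my-custom-app    # build the image locally from the Dockerfile
    docker push mmumshad/my-custom-app                        # publish it to the Docker Hub registry
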
  • 00:46:44 now let's take a closer look at that docker file
  • 00:46:46 docker file is a text file written in a
  • 00:46:49 specific format that docker can
  • 00:46:51 understand it's in an instruction and
  • 00:46:53 arguments format for example in this
  • 00:46:57 docker file everything on the left in
  • 00:47:00 caps is an instruction in this case from
  • 00:47:03 run copy and entry point are all
  • 00:47:07 instructions each of these instruct
  • 00:47:10 docker to perform a specific action
  • 00:47:12 while creating the image everything on
  • 00:47:15 the right is an argument to those
  • 00:47:17 instructions the first line from Ubuntu
  • 00:47:21 defines what the base OS should be for
  • 00:47:24 this container every docker image must
  • 00:47:27 be based off of another image either an
  • 00:47:30 OS or another image that was created
  • 00:47:33 before based on an OS you can find
  • 00:47:36 official releases of all operating
  • 00:47:38 systems on docker hub it's important to
  • 00:47:41 note that all docker files must start
  • 00:47:44 with a from instruction the run
  • 00:47:46 instruction instructs docker to run a
  • 00:47:49 particular command on those base images
  • 00:47:52 so at this point docker runs the apt-get
  • 00:47:55 update commands to fetch the updated
  • 00:47:57 packages and installs required
  • 00:48:00 dependencies on the image then the copy
  • 00:48:03 instruction copies files from the local
  • 00:48:06 system onto the docker image in this
  • 00:48:08 case the source code of our application
  • 00:48:10 is in the current folder and I will be
  • 00:48:12 copying it over to the location opt
  • 00:48:15 source code inside the docker image and
  • 00:48:18 finally entry point allows us to specify
  • 00:48:21 a command
  • 00:48:22 that will be run when the image is run
  • 00:48:25 as a container
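
Putting those instructions together, the Dockerfile described here looks roughly like this (the package names and paths are illustrative, following the lecture's flask example):

    FROM ubuntu

    RUN apt-get update
    RUN apt-get install -y python python-pip
    RUN pip install flask

    COPY . /opt/source-code

    ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run
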
  • 00:48:29 when docker builds the images it builds these in a layered
  • 00:48:31 architecture each line of instruction
  • 00:48:34 creates a new layer in the docker image
  • 00:48:37 with just the changes from the previous
  • 00:48:39 layer for example the first layer is a
  • 00:48:42 base Ubuntu OS followed by the second
  • 00:48:46 instruction that creates a second layer
  • 00:48:49 which installs all the apt packages and
  • 00:48:51 then the third instruction creates a
  • 00:48:53 third layer with the Python packages
  • 00:48:55 followed by the fourth layer that copies
  • 00:48:58 the source code over and the final layer
  • 00:49:00 that updates the entry point of the
  • 00:49:02 image since each layer only stores the
  • 00:49:05 changes from the previous layer it is
  • 00:49:08 reflected in the size as well if you
  • 00:49:10 look at the base Ubuntu image it is
  • 00:49:13 around 120 MB in size the apt packages
  • 00:49:17 that I install is around 300 MB and the
  • 00:49:20 remaining layers are small you could see
  • 00:49:23 this information if you run the docker
  • 00:49:25 history command followed by the image
  • 00:49:27 name when you run the
  • 00:49:34 docker build command you could see the various
  • 00:49:36 steps involved and the result of each
  • 00:49:41 task all the layers built are cached
  • 00:49:41 so the layered architecture helps you
  • 00:49:43 restart docker build from that
  • 00:49:45 particular step in case it fails or if
  • 00:49:48 you were to add new steps in the build
  • 00:49:50 process you wouldn't have to start all
  • 00:49:52 over again all the layers built are
  • 00:49:59 cached by docker so in case a particular
  • 00:50:02 step was to fail for example in this
  • 00:50:05 case step three failed and you were to
  • 00:50:08 fix the issue and rerun docker build it
  • 00:50:11 will reuse the previous layers from
  • 00:50:14 cache and continue to build the
  • 00:50:16 remaining layers the same is true if you
  • 00:50:19 were to add additional steps in the
  • 00:50:22 docker file this way rebuilding your
  • 00:50:25 image is faster and you don't have to
  • 00:50:28 wait for docker to rebuild the entire
  • 00:50:30 image each time this is helpful
  • 00:50:33 especially when you update source code
  • 00:50:35 of your application as it may change
  • 00:50:38 more frequently only the layers above
  • 00:50:41 the updated layers need to be rebuilt
  • 00:50:47 we just saw a number of products
  • 00:50:50 containerized such as databases
  • 00:50:53 development tools operating systems etc
  • 00:50:57 but that's not it you can
  • 00:50:59 containerize almost any
  • 00:51:01 application even simple ones like
  • 00:51:03 browsers or utilities like curl or
  • 00:51:06 applications like Spotify Skype etc
  • 00:51:10 basically you can containerize
  • 00:51:12 everything and going forward I see
  • 00:51:15 that's how everyone is going to run
  • 00:51:17 applications nobody is going to install
  • 00:51:20 anything anymore
  • 00:51:21 going forward instead they're just going
  • 00:51:24 to run it using docker and when they
  • 00:51:27 don't need it anymore
  • 00:51:28 get rid of it easily without having to
  • 00:51:31 clean up too much
  • 00:51:33 [Music]
  • 00:51:40 in this lecture we will look at commands
  • 00:51:44 arguments and entry points in docker
  • 00:51:47 let's start with a simple scenario say
  • 00:51:50 you were to run a docker container from
  • 00:51:52 an Ubuntu image when you run the docker
  • 00:51:54 run ubuntu command it runs an instance
  • 00:51:57 of Ubuntu image and exits immediately if
  • 00:52:01 you were to list the running containers
  • 00:52:02 you wouldn't see the container running
  • 00:52:05 if you list all containers including
  • 00:52:07 those that are stopped you will see that
  • 00:52:10 the new container
  • 00:52:11 you ran is in an exited state
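A quick sketch of that sequence:

```sh
docker run ubuntu    # runs an instance of the ubuntu image, which exits immediately
docker ps            # lists running containers: nothing is shown
docker ps -a         # lists all containers: the ubuntu container is in Exited state
```

  • 00:52:11 now why is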
  • 00:52:14 that unlike virtual machines containers
  • 00:52:18 are not meant to host an operating
  • 00:52:21 system containers are meant to run a
  • 00:52:23 specific task or process such as to host
  • 00:52:27 an instance of a web server or
  • 00:52:28 application server or a database or
  • 00:52:31 simply to carry out some kind of
  • 00:52:33 computation or analysis once the task is
  • 00:52:37 complete the container exits a container
  • 00:52:40 only lives as long as the process inside
  • 00:52:43 it is alive if the web service inside
  • 00:52:47 the container is stopped or crashes the
  • 00:52:49 container exits so who defines what
  • 00:52:52 process is run within the container if
  • 00:52:55 you look at the docker file for popular
  • 00:52:57 docker images like nginx you will see
  • 00:53:00 an instruction called CMD which stands
  • 00:53:03 for command that defines the program
  • 00:53:05 that will be run within the container
  • 00:53:07 when it starts for the nginx image it
  • 00:53:10 is the nginx command for the MySQL
  • 00:53:13 image it is the mysqld command what we
  • 00:53:17 tried to do earlier was to run a
  • 00:53:19 container with a plain Ubuntu operating
  • 00:53:22 system let us look at the docker file
  • 00:53:25 for this image you will see that it uses
  • 00:53:28 bash as the default command now bash is
  • 00:53:32 not really a process like a web server
  • 00:53:35 or database server it is a shell that
  • 00:53:38 listens for inputs from a terminal if it
  • 00:53:41 cannot find a terminal it exits when we
  • 00:53:44 ran the ubuntu container earlier docker
  • 00:53:47 created a container from the Ubuntu
  • 00:53:49 image and launched the bash program by
  • 00:53:53 default docker does not attach a
  • 00:53:55 terminal to a container when it is run
  • 00:53:57 and so the bash program does not find
  • 00:54:00 the terminal and so it exits since the
  • 00:54:04 process that was started when the
  • 00:54:06 container was created finished the
  • 00:54:09 container exits as well so how do you
  • 00:54:12 specify a different command to start the
  • 00:54:15 container one option is to append a
  • 00:54:18 command to the docker run command and
  • 00:54:20 that way it overrides the default
  • 00:54:23 command specified within the image in
  • 00:54:25 this case I run the docker run Ubuntu
  • 00:54:28 command with the sleep 5 command as the
  • 00:54:31 added option this way when the container
  • 00:54:34 starts it runs the sleep program waits
  • 00:54:37 for 5 seconds and then exits
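As described, the appended command overrides the image's default:

```sh
docker run ubuntu sleep 5    # the container runs sleep 5 instead of bash
```

  • 00:54:37 but how do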
  • 00:54:41 you make that change permanent say you
  • 00:54:43 want the image to always run the sleep
  • 00:54:46 command when it starts you would then
  • 00:54:48 create your own image from the base
  • 00:54:51 ubuntu image and specify a new command
  • 00:54:54 there are different ways of specifying
  • 00:54:56 the command either the command simply as
  • 00:54:59 is in a shell form or in a JSON array
  • 00:55:02 format like this but remember when you
  • 00:55:06 specify in a JSON array format the first
  • 00:55:09 element in the array should be the
  • 00:55:11 executable in this case the sleep
  • 00:55:14 program do not specify the command and
  • 00:55:17 parameters together like this the
  • 00:55:20 command and its parameters should be
  • 00:55:22 separate elements in the list
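A sketch of the two forms in a Dockerfile (only one CMD takes effect; the shell form is shown as a comment for comparison):

```Dockerfile
FROM ubuntu

# shell form:
#   CMD sleep 5
# JSON array form, with the executable and its parameter as separate elements:
CMD ["sleep", "5"]
```

  • 00:55:22 so I now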
  • 00:55:26 build my new image using the docker
  • 00:55:28 build command and name it as Ubuntu
  • 00:55:31 sleeper I could now simply run the
  • 00:55:34 docker run ubuntu-sleeper command and get
  • 00:55:37 the same results it always sleeps for 5
  • 00:55:41 seconds and exits but what if I wish to
  • 00:55:44 change the number of seconds it sleeps
  • 00:55:47 currently it is hard-coded
  • 00:55:49 to 5 seconds as we learned before one
  • 00:55:52 option is to run the docker run command
  • 00:55:55 with the new command appended to it in
  • 00:55:57 this case sleep 10
  • 00:56:00 so the command that will be run at
  • 00:56:01 startup will be sleep 10 but it doesn't
  • 00:56:05 look very good the name of the image
  • 00:56:07 ubuntu sleeper in itself implies that
  • 00:56:10 the container will sleep so we shouldn't
  • 00:56:12 have to specify the sleep command again
  • 00:56:14 instead we would like it to be something
  • 00:56:17 like this docker run ubuntu sleeper 10
  • 00:56:21 we only want to pass in the number of
  • 00:56:24 seconds the container should sleep and
  • 00:56:26 sleep command should be invoked
  • 00:56:28 automatically and that is where the
  • 00:56:31 entry point instruction comes into play
  • 00:56:33 the entry point instruction is like the
  • 00:56:36 command instruction as in you can
  • 00:56:38 specify the program that will be run
  • 00:56:40 when the container starts and whatever
  • 00:56:43 you specify on the command line in this
  • 00:56:46 case 10 will get appended to the entry
  • 00:56:49 point so the command that will be run
  • 00:56:51 when the container starts is sleep 10 so
  • 00:56:55 that's the difference between the two in
  • 00:56:57 case of the CMD instruction the command
  • 00:57:01 line parameters passed will get replaced
  • 00:57:03 entirely whereas in case of entry point
  • 00:57:06 the command line parameters will get
  • 00:57:09 appended now in the second case what if
  • 00:57:12 I run the ubuntu sleeper image command
  • 00:57:15 without appending the number of seconds
  • 00:57:17 then the command at startup will be just
  • 00:57:21 sleep and you get the error that the
  • 00:57:24 operand is missing so how do you
  • 00:57:27 configure a default value for the
  • 00:57:29 command if one was not specified in the
  • 00:57:32 command line that's where you would use
  • 00:57:34 both entry point as well as the command
  • 00:57:37 instruction in this case the command
  • 00:57:40 instruction will be appended to the
  • 00:57:42 entry point instruction so at startup
  • 00:57:45 the command would be sleep 5 if you
  • 00:57:47 didn't specify any parameters in the
  • 00:57:50 command line if you did then that will
  • 00:57:52 override the command instruction and
  • 00:57:54 remember for this to happen you should
  • 00:57:57 always specify the entry point and
  • 00:57:59 command instructions in a JSON format
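A sketch of the combined form for the ubuntu-sleeper image:

```Dockerfile
FROM ubuntu

ENTRYPOINT ["sleep"]
CMD ["5"]
```

With this, docker run ubuntu-sleeper runs sleep 5 by default, while docker run ubuntu-sleeper 10 replaces the CMD and runs sleep 10.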
  • 00:58:03 finally what if you really want
  • 00:58:06 to modify the entry point during runtime
  • 00:58:09 say from sleep to an imaginary sleep 2.0
  • 00:58:14 well in that case you can override it by
  • 00:58:17 using the entry point option in the
  • 00:58:20 docker run command the final command at
  • 00:58:23 startup would then be sleep 2.0 10
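A sketch of that override (sleep2.0 is the lecture's imaginary program):

```sh
docker run --entrypoint sleep2.0 ubuntu-sleeper 10
# command at startup: sleep2.0 10
```

  • 00:58:23 well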
  • 00:58:28 that's it for this lecture and I will
  • 00:58:31 see you in the next lecture we now look at
  • 00:58:42 networking in docker when you install
  • 00:58:46 docker it creates three networks
  • 00:58:48 automatically bridge none and host bridge
  • 00:58:53 is the default network a container gets
  • 00:58:56 attached to if you would like to
  • 00:58:58 associate the container with any other
  • 00:59:00 network you specify the network
  • 00:59:03 information using the --network command
  • 00:59:05 line parameter like this we will now
  • 00:59:09 look at each of these networks the bridge
  • 00:59:12 network is a private internal network
  • 00:59:16 created by docker on the host all
  • 00:59:19 containers are attached to this network by
  • 00:59:21 default and they get an internal IP
  • 00:59:24 address usually in the range 172.17
  • 00:59:27 series the containers can access each
  • 00:59:31 other using this internal IP if required
  • 00:59:34 to access any of these containers from
  • 00:59:37 the outside world map the ports of these
  • 00:59:40 containers to ports on the docker host
  • 00:59:43 as we have seen before another way to
  • 00:59:47 access the containers externally is to
  • 00:59:49 associate the container to the hosts
  • 00:59:51 network this takes out any network
  • 00:59:54 isolation between the docker host and
  • 00:59:56 the docker container meaning if you were
  • 00:59:59 to run a web server on port 5000 in a
  • 01:00:02 web app container it is automatically
  • 01:00:04 accessible on the same port externally
  • 01:00:07 without requiring any port mapping as
  • 01:00:10 the web container uses the hosts network
  • 01:00:13 this would also mean that unlike before
  • 01:00:16 you will now not be able to run multiple
  • 01:00:19 web containers on the same host on the
  • 01:00:22 same port as the ports are now common to
  • 01:00:25 all containers in the host
  • 01:00:27 network with the none network the
  • 01:00:32 containers are not attached to any
  • 01:00:34 network and don't have any access to
  • 01:00:37 the external network or other containers
  • 01:00:40 they run in an isolated Network so we
  • 01:00:46 just saw the default bridge network with
  • 01:00:49 the network ID 172.17.0.1 so all
  • 01:00:53 containers associated to this default
  • 01:00:55 network will be able to communicate to
  • 01:00:58 each other but what if we wish to
  • 01:01:00 isolate the containers within the docker
  • 01:01:02 host for example the first two web
  • 01:01:05 containers on internal network 172 and
  • 01:01:08 the second two containers on a different
  • 01:01:11 internal network like 182 by default
  • 01:01:15 docker only creates one internal bridge
  • 01:01:18 network we could create our own internal
  • 01:01:21 network using the command docker network
  • 01:01:24 create and specify the driver which is
  • 01:01:26 bridge in this case and the subnet for
  • 01:01:29 that network followed by the custom
  • 01:01:31 isolated network name run the docker
  • 01:01:34 network LS command to list all networks
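A sketch of those commands (the subnet follows the lecture's 182 example; the network name is a placeholder):

```sh
docker network create --driver bridge --subnet 182.18.0.0/16 custom-isolated-network
docker network ls    # lists all networks, including the new one
```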
  • 01:01:37 so how do we see the network settings
  • 01:01:40 and the IP address assigned to an
  • 01:01:42 existing container run the docker
  • 01:01:45 inspect command with the ID or name of
  • 01:01:48 the container and you will find a
  • 01:01:50 section on network settings there you
  • 01:01:53 can see the type of network the
  • 01:01:55 container is attached to its internal IP
  • 01:01:57 address MAC address and other settings
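For example (the container name is a placeholder):

```sh
docker inspect blissful_hopper    # the details appear under the NetworkSettings section
```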
  • 01:02:03 containers can reach each other using
  • 01:02:05 their names for example in this case I
  • 01:02:08 have a webserver and a MySQL database
  • 01:02:11 container running on the same node how
  • 01:02:14 can I get my web server to access the
  • 01:02:16 database on the database container one
  • 01:02:19 thing I could do is to use the internal
  • 01:02:21 IP address assigned to the MySQL container
  • 01:02:24 which in this case is 172.17.0.3 but
  • 01:02:29 that is not very ideal because it is not
  • 01:02:31 guaranteed that the container will get
  • 01:02:34 the same IP when the system reboots the
  • 01:02:38 right
  • 01:02:38 way to do it is to use the container
  • 01:02:41 name all containers in a docker host can
  • 01:02:44 resolve each other
  • 01:02:45 with the name of the container docker
  • 01:02:48 has a built-in DNS server that helps the
  • 01:02:51 containers to resolve each other using
  • 01:02:54 the container name
  • 01:02:55 note that the built in DNS server always
  • 01:02:58 runs at address 127.0.0.11
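A minimal sketch, assuming a user-defined network where docker's embedded DNS resolves container names (my-web-app is a placeholder image):

```sh
docker network create app-net
docker run -d --name mysql-db --network app-net mysql
docker run -d --name webserver --network app-net my-web-app
# inside the webserver container, the hostname mysql-db now resolves
# to the database container's IP
```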
  • 01:03:03 so how does docker implement networking
  • 01:03:07 what's the technology behind it like how
  • 01:03:10 are the containers isolated within the
  • 01:03:12 host docker uses network namespaces that
  • 01:03:17 creates a separate name space for each
  • 01:03:20 container it then uses virtual Ethernet
  • 01:03:23 pairs to connect containers together
  • 01:03:27 well that's all we can talk about for
  • 01:03:30 now more about these are advanced
  • 01:03:32 concepts that are discussed in the
  • 01:03:35 advanced course on docker on code cloud
  • 01:03:38 that's all for now from this lecture on
  • 01:03:41 networking head over to the practice
  • 01:03:43 tests and practice working with
  • 01:03:45 networking in docker I will see you in
  • 01:03:48 the next lecture
  • 01:03:50 [Music]
  • 01:03:58 hello and welcome to this lecture and we
  • 01:04:01 are learning advanced docker concepts in
  • 01:04:04 this lecture we're going to talk about
  • 01:04:05 docker storage drivers and file systems
  • 01:04:08 we're going to see where and how docker
  • 01:04:11 stores data and how it manages file
  • 01:04:14 systems of the containers let us start
  • 01:04:17 with how docker stores data on the local
  • 01:04:20 file system when you install docker on a
  • 01:04:24 system it creates this folder structure
  • 01:04:27 at /var/lib/docker you have multiple
  • 01:04:31 folders under it called aufs containers
  • 01:04:34 image volumes etc this is where docker
  • 01:04:37 stores all its data by default when I
  • 01:04:41 say data
  • 01:04:42 I mean files related to images and
  • 01:04:44 containers running on the docker host
  • 01:04:46 for example all files related to
  • 01:04:49 containers are stored under the
  • 01:04:51 containers folder and the files related
  • 01:04:54 to images are stored under the image
  • 01:04:56 folder any volumes created by the docker
  • 01:04:59 containers are created under the volumes
  • 01:05:01 folder well don't worry about that for
  • 01:05:03 now we will come back to that in a bit
  • 01:05:06 for now let's just understand where
  • 01:05:09 docker stores its files and in what
  • 01:05:12 format so how exactly does docker store
  • 01:05:16 the files of an image and a container to
  • 01:05:19 understand that we need to understand
  • 01:05:21 Dockers layered architecture let's
  • 01:05:24 quickly recap something we learned when
  • 01:05:26 docker builds images it builds these in
  • 01:05:29 a layered architecture each line of
  • 01:05:32 instruction in the docker file creates a
  • 01:05:35 new layer in the docker image with just
  • 01:05:38 the changes from the previous layer for
  • 01:05:40 example the first layer is a base Ubuntu
  • 01:05:43 operating system followed by the second
  • 01:05:46 instruction that creates a second layer
  • 01:05:48 which installs all the apt packages and
  • 01:05:51 then the third instruction creates a
  • 01:05:54 third layer with the Python
  • 01:05:56 packages followed by the fourth layer
  • 01:05:59 that copies the source code over and
  • 01:06:01 then finally the fifth layer that
  • 01:06:03 updates the entry point of the image
  • 01:06:08 each layer only stores the changes from
  • 01:06:11 the previous layer it is reflected in
  • 01:06:14 the size as well if you look at the base
  • 01:06:17 ubuntu image it is around 120 megabytes
  • 01:06:20 in size the apt packages that I install
  • 01:06:23 are around 300 MB and then the remaining
  • 01:06:26 layers are small to understand the
  • 01:06:30 advantages of this layered architecture
  • 01:06:32 let's consider a second application this
  • 01:06:36 application has a different docker file
  • 01:06:39 but it's very similar to our first
  • 01:06:41 application as in it uses the same base
  • 01:06:44 image as Ubuntu uses the same Python and
  • 01:06:48 flask dependencies but uses a different
  • 01:06:51 source code to create a different
  • 01:06:53 application and so a different entry
  • 01:06:56 point as well when I run the docker
  • 01:06:58 build command to build a new image for
  • 01:07:01 this application since the first three
  • 01:07:04 layers of both the applications are the
  • 01:07:06 same docker is not going to build the
  • 01:07:09 first three layers instead it reuses the
  • 01:07:13 same three layers it built for the first
  • 01:07:16 application from the cache and only
  • 01:07:18 creates the last two layers with the new
  • 01:07:21 sources and the new entry point this way
  • 01:07:25 docker builds images faster and
  • 01:07:28 efficiently saves disk space this is
  • 01:07:31 also applicable if you were to update
  • 01:07:34 your application code whenever you
  • 01:07:36 update your application code such as the
  • 01:07:39 app dot py in this case docker simply
  • 01:07:42 reuses all the previous layers from
  • 01:07:44 cache and quickly rebuilds the
  • 01:07:47 application image by updating the latest
  • 01:07:50 source code thus saving us a lot of time
  • 01:07:53 during rebuilds and updates let's
  • 01:07:58 rearrange the layers bottom up so we can
  • 01:08:01 understand it better at the bottom we
  • 01:08:04 have the base ubuntu layer then the
  • 01:08:06 packages then the dependencies and then
  • 01:08:09 the source code of the application and
  • 01:08:11 then the entry point all of these layers
  • 01:08:15 are created when we run the docker build
  • 01:08:18 command to form the final docker image
  • 01:08:21 so all of these are the docker image
  • 01:08:24 layers once the build is complete you
  • 01:08:27 cannot modify the contents of these
  • 01:08:29 layers and so they are read-only and you
  • 01:08:32 can only modify them by initiating a new
  • 01:08:35 build when you run a container based off
  • 01:08:38 of this image using the docker run
  • 01:08:41 command docker creates a container based
  • 01:08:44 off of these layers and creates a new
  • 01:08:46 writable layer on top of the image layer
  • 01:08:49 the writable layer is used to store data
  • 01:08:52 created by the container such as log
  • 01:08:55 files written by the applications any
  • 01:08:57 temporary files generated by the
  • 01:08:59 container or just any file modified by
  • 01:09:02 the user on that container the life of
  • 01:09:06 this layer though is only as long as the
  • 01:09:08 container is alive when the container is
  • 01:09:11 destroyed this layer and all of the
  • 01:09:13 changes stored in it are also destroyed
  • 01:09:16 remember that the same image layer is
  • 01:09:19 shared by all containers created using
  • 01:09:22 this image if I were to log in to the
  • 01:09:26 newly created container and say create a
  • 01:09:29 new file called temp dot txt it will
  • 01:09:33 create that file in the container layer
  • 01:09:35 which is read and write we just said
  • 01:09:38 that the files in the image layer are
  • 01:09:41 read-only meaning you cannot edit
  • 01:09:43 anything in those layers let's take an
  • 01:09:46 example of our application code since we
  • 01:09:49 bake our code into the image the code is
  • 01:09:52 part of the image layer and as such is
  • 01:09:54 read-only after running a container what
  • 01:09:58 if I wish to modify the source code to
  • 01:10:00 say test a change remember the same
  • 01:10:04 image layer may be shared between
  • 01:10:06 multiple containers created from this
  • 01:10:08 image so does it mean that I cannot
  • 01:10:11 modify this file inside the container no
  • 01:10:14 I can still modify this file but before
  • 01:10:18 I save the modified file docker
  • 01:10:20 automatically creates a copy of the file
  • 01:10:23 in the read/write layer and I will then
  • 01:10:25 be modifying a different version of the
  • 01:10:27 file in the readwrite layer all future
  • 01:10:31 modifications will be done on this copy
  • 01:10:33 of the file in the read/write layer
  • 01:10:35 this is called copy-on-write mechanism
  • 01:10:38 the image layer being read-only just
  • 01:10:41 means that the files in these layers
  • 01:10:43 will not be modified in the image itself
  • 01:10:46 so the image will remain the same all
  • 01:10:49 the time until you rebuild the image
  • 01:10:51 using the docker build command what
  • 01:10:56 happens when we get rid of the container
  • 01:10:57 all of the data that was stored in the
  • 01:11:01 container layer also gets deleted the
  • 01:11:04 change we made to the app dot py and the
  • 01:11:07 new temp file we created will also get
  • 01:11:10 removed so what if we wish to persist
  • 01:11:13 this data for example if we were working
  • 01:11:16 with our database and we would like to
  • 01:11:18 preserve the data created by the
  • 01:11:20 container we could add a persistent
  • 01:11:22 volume to the container to do this first
  • 01:11:26 create a volume using the docker volume
  • 01:11:29 create command so when I run the docker
  • 01:11:32 volume create data underscore volume
  • 01:11:34 command it creates a folder called data
  • 01:11:38 underscore volume under the
  • 01:11:41 /var/lib/docker/volumes directory then when I run
  • 01:11:46 the docker container using the docker
  • 01:11:47 run command I could mount this volume
  • 01:11:50 inside the docker containers read/write
  • 01:11:52 layer using the -v option like this
  • 01:11:55 so I would do a docker run -v then
  • 01:11:59 specify my newly created volume name
  • 01:12:01 followed by a colon and the location
  • 01:12:04 inside my container which is the default
  • 01:12:07 location where MySQL stores data and
  • 01:12:09 that is /var/lib/mysql and then the
  • 01:12:13 image name MySQL this will create a new
  • 01:12:17 container and mount the data volume we
  • 01:12:20 created into the /var/lib/mysql folder
  • 01:12:23 inside the container so all data
  • 01:12:26 written by the database is in fact
  • 01:12:28 stored on the volume created on the
  • 01:12:31 docker host even if the container is
  • 01:12:34 destroyed the data remains intact
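The two commands just described:

```sh
docker volume create data_volume
docker run -v data_volume:/var/lib/mysql mysql
```

  • 01:12:34 now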
  • 01:12:37 what if you didn't run the docker volume
  • 01:12:40 create command to create the volume
  • 01:12:41 before the docker run command for
  • 01:12:44 example if I run the docker run command
  • 01:12:46 to create a new instance of
  • 01:12:49 mysql container with the volume data
  • 01:12:52 underscore volume 2 which I have not
  • 01:12:54 created yet docker will automatically
  • 01:12:57 create a volume named data underscore
  • 01:13:00 volume 2 and mount it to the container
  • 01:13:03 you should be able to see all these
  • 01:13:06 volumes if you list the contents of the
  • 01:13:09 /var/lib/docker/volumes folder this is
  • 01:13:13 called volume mounting as we are
  • 01:13:16 mounting a volume created by docker
  • 01:13:18 under the /var/lib/docker/volumes folder
  • 01:13:21 but what if we had our data already at
  • 01:13:25 another location for example let's say
  • 01:13:27 we have some external storage on the
  • 01:13:30 docker host at /data and we
  • 01:13:33 would like to store database data on
  • 01:13:35 that volume and not in the default
  • 01:13:38 /var/lib/docker/volumes folder in that case
  • 01:13:42 we would run a container using the
  • 01:13:44 command docker run -v but in this case
  • 01:13:47 we will provide the complete path to the
  • 01:13:50 folder we would like to mount that is
  • 01:13:52 /data/mysql and so it
  • 01:13:56 will create a container and mount the
  • 01:13:59 folder to the container this is called
  • 01:14:02 bind mounting so there are two types of
  • 01:14:05 mounts a volume mounting and a bind
  • 01:14:07 mount volume mount mounts a volume from
  • 01:14:11 the volumes directory and bind mount
  • 01:14:13 mounts a directory from any location on
  • 01:14:16 the docker host
  • 01:14:18 one final point to note before I let you go
  • 01:14:21 using -v is the old style the new
  • 01:14:26 way is to use the --mount option the
  • 01:14:29 --mount is the preferred way as it is
  • 01:14:32 more verbose so you have to specify each
  • 01:14:35 parameter in a key equals value format
  • 01:14:38 for example the previous command can be
  • 01:14:41 written with the --mount option as
  • 01:14:43 this using the type source and target
  • 01:14:46 options the type in this case is bind
  • 01:14:49 the source is the location on my host
  • 01:14:52 and target is the location on my
  • 01:14:55 container
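A sketch of the two equivalent forms:

```sh
# old -v style bind mount
docker run -v /data/mysql:/var/lib/mysql mysql

# preferred --mount style with explicit key=value parameters
docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql
```

  • 01:14:55 so who is responsible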
  • 01:15:00 doing all of these operations
  • 01:15:03 maintaining the layered architecture
  • 01:15:04 creating a writable layer moving files
  • 01:15:08 across layers to enable copy and write
  • 01:15:10 etc it's the storage drivers so docker
  • 01:15:14 uses storage drivers to enable layered
  • 01:15:16 architecture some of the common storage
  • 01:15:19 drivers are AUFS BTRFS ZFS device
  • 01:15:24 mapper overlay and overlay2 the
  • 01:15:27 selection of the storage driver depends
  • 01:15:30 on the underlying OS being used for
  • 01:15:32 example with Ubuntu the default
  • 01:15:35 storage driver is AUFS whereas this
  • 01:15:38 storage driver is not available on other
  • 01:15:40 operating systems like Fedora or CentOS
  • 01:15:42 in that case device mapper may be a
  • 01:15:46 better option docker will choose the
  • 01:15:49 best storage driver available
  • 01:15:51 automatically based on the operating
  • 01:15:53 system the different storage drivers
  • 01:15:56 also provide different performance and
  • 01:15:58 stability characteristics so you may
  • 01:16:01 want to choose one that fits the needs
  • 01:16:03 of your application and your
  • 01:16:05 organization if you would like to read
  • 01:16:08 more on any of these storage drivers
  • 01:16:10 please refer to the links in the
  • 01:16:12 attached documentation for now that is
  • 01:16:15 all from the docker architecture
  • 01:16:18 concepts see you in the next lecture
  • 01:16:21 [Music]
  • 01:16:30 hello and welcome to this lecture on
  • 01:16:32 docker compose going forward we will be
  • 01:16:36 working with configurations in YAML files
  • 01:16:39 so it is important that you are
  • 01:16:41 comfortable with YAML let's recap a few
  • 01:16:45 things real quick in this course we first
  • 01:16:48 learned how to run a docker container
  • 01:16:50 using the docker run command if we
  • 01:16:53 needed to set up a complex application
  • 01:16:55 running multiple services a better way
  • 01:16:59 to do it is to use docker compose with
  • 01:17:02 docker compose we could create a
  • 01:17:04 configuration file in YAML format called
  • 01:17:07 docker-compose.yml and put together
  • 01:17:11 the different services and the options
  • 01:17:14 specific to running them in this
  • 01:17:17 file then we could simply run a docker
  • 01:17:21 compose up command to bring up the
  • 01:17:23 entire application stack this is easier
  • 01:17:26 to implement run and maintain as all
  • 01:17:29 changes are always stored in the docker
  • 01:17:31 compose configuration file however this
  • 01:17:34 is all only applicable to running
  • 01:17:36 containers on a single docker host and
  • 01:17:39 for now don't worry about the yamo file
  • 01:17:42 we will take a closer look at the yamo
  • 01:17:44 file in a bit and see how to put it
  • 01:17:47 together that was a really simple
  • 01:17:50 application that I put together let us
  • 01:17:52 look at a better example I'm going to
  • 01:17:55 use the same sample application that
  • 01:17:57 everyone uses to demonstrate docker it's
  • 01:18:00 a simple yet comprehensive application
  • 01:18:03 developed by docker to demonstrate the
  • 01:18:06 various features available in running an
  • 01:18:08 application stack on docker
  • 01:18:11 so let's first get familiarized with the
  • 01:18:14 application because we will be working
  • 01:18:16 with the same application in different
  • 01:18:19 sections through the rest of this course
  • 01:18:22 this is a sample voting application
  • 01:18:24 which provides an interface for a user
  • 01:18:28 to vote and another interface to show
  • 01:18:31 the results the application
  • 01:18:33 consists of various components such as
  • 01:18:36 the voting app which is a web
  • 01:18:38 application developed in Python to
  • 01:18:40 provide the user with an interface to
  • 01:18:43 choose between two options a cat and a
  • 01:18:46 dog when you make a selection the vote
  • 01:18:50 is stored in Redis for those of you who
  • 01:18:53 are new to Redis Redis in this case
  • 01:18:55 serves as a database in memory this vote
  • 01:18:59 is then processed by the worker which is
  • 01:19:01 an application written in dotnet the
  • 01:19:03 worker application takes the new vote
  • 01:19:06 and updates the persistent database
  • 01:19:08 which is a PostgreSQL in our case the
  • 01:19:12 PostgreSQL simply has a table with the
  • 01:19:15 number of votes for each category cats
  • 01:19:17 and dogs in this case it increments the
  • 01:19:20 number of votes for cats as our vote was
  • 01:19:23 for cats finally the result of the vote
  • 01:19:26 is displayed in a web interface which is
  • 01:19:29 another web application developed in
  • 01:19:31 node.js this resulting application reads
  • 01:19:35 the count of votes from the Postgres
  • 01:19:37 SQL database and displays it to the
  • 01:19:40 user so that is the architecture and
  • 01:19:43 data flow of this simple voting
  • 01:19:46 application stack as you can see this
  • 01:19:49 sample application is built with a
  • 01:19:52 combination of different services
  • 01:19:54 different development tools and multiple
  • 01:19:57 different development platforms such as
  • 01:20:00 Python node.js net etc this sample
  • 01:20:05 application will be used to showcase how
  • 01:20:08 easy it is to set up an entire
  • 01:20:10 application stack consisting of diverse
  • 01:20:13 components in docker let us keep aside
  • 01:20:17 docker swarm services and stacks for a
  • 01:20:20 minute and see how we can put together
  • 01:20:22 this application stack on a single
  • 01:20:26 docker engine using first docker run
  • 01:20:30 commands and then docker compose let us
  • 01:20:33 assume that all images of applications
  • 01:20:36 are already built and are available on
  • 01:20:39 docker repository let us start with the
  • 01:20:42 data layer first we run the docker run
  • 01:20:45 command to start an instance
  • 01:20:46 of redis by running the docker run Redis
  • 01:20:49 command we will add the -d parameter to
  • 01:20:53 run this container in the background and
  • 01:20:55 we will also name the container Redis
  • 01:20:58 now naming the containers is important
  • 01:21:01 why is that important
  • 01:21:03 hold that thought we will come to that
  • 01:21:05 in a bit next we will deploy the
  • 01:21:09 Postgres sequel database by running the
  • 01:21:11 docker run postgres command this time
  • 01:21:15 we will add the -d option to run this
  • 01:21:18 in the background and name this
  • 01:21:20 container DB for database next we will
  • 01:21:24 start with the application services we
  • 01:21:27 will deploy a front-end app for voting
  • 01:21:29 interface by running an instance of
  • 01:21:31 voting app image run the docker run
  • 01:21:34 command and name the instance vote since
  • 01:21:37 this is a web server it has a web UI
  • 01:21:39 instance running on port 80 we will
  • 01:21:42 publish that port to 5000 on the host
  • 01:21:45 system so we can access it from a
  • 01:21:47 browser next we will deploy the result
  • 01:21:50 web application that shows the results
  • 01:21:52 to the user for this we deploy a
  • 01:21:55 container using the result-app image
  • 01:21:58 and publish port 80 to port 5001 on the
  • 01:22:02 host this way we can access the web UI
  • 01:22:04 of the resulting app on a browser
  • 01:22:07 finally we deployed the worker by
  • 01:22:10 running an instance of the worker image
  • 01:22:13 okay now this is all good and we can see
  • 01:22:17 that all the instances are running on
  • 01:22:19 the host but there is some problem it
  • 01:22:23 just does not seem to work the problem
  • 01:22:26 is that we have successfully run all the
  • 01:22:28 different containers but we haven't
  • 01:22:31 actually linked them together as in we
  • 01:22:34 haven't told the voting web application
  • 01:22:36 to use this particular Redis instance
  • 01:22:38 there could be multiple Redis instances
  • 01:22:41 running we haven't told the worker and
  • 01:22:44 the resulting app to use this particular
  • 01:22:47 Postgres sequel database that we ran so
  • 01:22:50 how do we do that that is where we use
  • 01:22:53 links link is a command line option
  • 01:22:57 which can be used to link two containers
  • 01:23:00 together
  • 01:23:00 for example the voting app web service
  • 01:23:04 is dependent on the Redis service when
  • 01:23:08 the web server starts as you can see in
  • 01:23:10 this piece of code on the web server it
  • 01:23:13 looks for a Redis service running on
  • 01:23:15 host Redis but the voting app container
  • 01:23:18 cannot resolve a host by the name Redis
  • 01:23:21 to make the voting app aware of the
  • 01:23:24 Redis service we add a link option while
  • 01:23:27 running the voting app container to link
  • 01:23:30 it to the Redis container adding a
  • 01:23:33 --link option to the docker run command
  • 01:23:35 and specifying the name of the Redis
  • 01:23:38 container which in this case is
  • 01:23:40 Redis followed by a colon and the name
  • 01:23:43 of the host that the voting app is
  • 01:23:45 looking for which is also Redis in this
  • 01:23:48 case remember that this is why we named
  • 01:23:51 the container when we ran it the first
  • 01:23:54 time so we could use its name while
  • 01:23:56 creating a link what this is in fact
  • 01:24:00 doing is it creates an entry into the
  • 01:24:03 etc' host file on the voting app
  • 01:24:05 container adding an entry with a host
  • 01:24:07 name rebus with an internal IP of the
  • 01:24:09 Redis container
  • 01:24:11 similarly we add a link for the result
  • 01:24:14 app to communicate with the database by
  • 01:24:17 adding a link option to refer the
  • 01:24:19 database by the name DB as you can see
  • 01:24:23 in this source code of the application
  • 01:24:25 it makes an attempt to connect to a
  • 01:24:27 Postgres database on hosts DB finally
  • 01:24:32 the worker application requires access
  • 01:24:34 to both the Redis as well as the
  • 01:24:36 Postgres database so we add two links to
  • 01:24:39 the worker application one link to link
  • 01:24:42 the Redis and the other link to link
  • 01:24:44 Postgres database
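The full sequence might look like this (the image names voting-app, result-app and worker follow the lecture's naming and are assumptions):

```sh
docker run -d --name redis redis
docker run -d --name db postgres
docker run -d --name vote -p 5000:80 --link redis:redis voting-app
docker run -d --name result -p 5001:80 --link db:db result-app
docker run -d --name worker --link redis:redis --link db:db worker
```

  • 01:24:44 note that using links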
  • 01:24:49 this way is deprecated and the support
  • 01:24:52 may be removed in docker in the future this
  • 01:24:56 is because as we will see in some time
  • 01:24:59 advanced and newer concepts in docker
  • 01:25:02 swarm and networking supports better
  • 01:25:04 ways of achieving what we just did here
  • 01:25:07 with links but I wanted to mention that
  • 01:25:10 anyway so you learned the concept from
  • 01:25:13 the very
  • 01:25:13 basics once we have the docker run
  • 01:25:16 commands tested and ready it is easy to
  • 01:25:19 generate a docker compose file from it
  • 01:25:21 we start by creating a dictionary of
  • 01:25:24 container names we will use the same
  • 01:25:26 name we used in the docker run commands
  • 01:25:28 so we take all the names and create a
  • 01:25:30 key with each of them then under each
  • 01:25:33 item we specify which image to use the
  • 01:25:37 key is the image and the value is the
  • 01:25:40 name of the image to use next inspect
  • 01:25:44 the commands and see what are the other
  • 01:25:45 options used we published ports so let's
  • 01:25:49 move those ports under the respective
  • 01:25:51 containers so we create a property
  • 01:25:54 called ports and lists all the ports
  • 01:25:56 that you would like to publish under
  • 01:25:58 that finally we are left with links so
  • 01:26:02 whichever container requires a link
  • 01:26:04 create a property under it called links
  • 01:26:07 and provide an array of links such as
  • 01:26:09 redis or db note that you could also
  • 01:26:14 specify the name of the link this way
  • 01:26:16 without the colon and the target
  • 01:26:19 name and it will create a link
  • 01:26:22 with the same name as the target name
  • 01:26:25 specifying db:db is similar to
  • 01:26:29 simply specifying db docker compose will assume the
  • 01:26:33 same value to create a link now that we
  • 01:26:36 are all done with our docker compose
  • 01:26:38 file bringing up the stack is really
  • 01:26:40 simple run the docker compose up
  • 01:26:43 command to bring up the entire
  • 01:26:44 application stack
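A sketch of the resulting docker-compose.yml in the original version 1 style (image names are assumptions following the lecture's naming):

```yaml
redis:
  image: redis
db:
  image: postgres
vote:
  image: voting-app
  ports:
    - 5000:80
  links:
    - redis
result:
  image: result-app
  ports:
    - 5001:80
  links:
    - db
worker:
  image: worker
  links:
    - redis
    - db
```

  • 01:26:44 when we looked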
  • 01:26:52 example of the sample voting application
  • 01:26:54 we assumed that all images are already
  • 01:26:58 built out of the five different
  • 01:27:01 components two of them Redis and
  • 01:27:03 Postgres images we know are already
  • 01:27:06 available on docker hub there are
  • 01:27:08 official images from Redis and Postgres
  • 01:27:11 but the remaining three are our own
  • 01:27:14 application it is not necessary that
  • 01:27:17 they are already built and available in
  • 01:27:19 a docker registry if we would like to
  • 01:27:22 instruct docker compose to run a docker
  • 01:27:25 build instead of trying to pull an image
  • 01:27:27 we can replace the image line with a
  • 01:27:30 build line and specify the location of a
  • 01:27:33 directory which contains the application
  • 01:27:35 code and a docker file with instructions
  • 01:27:38 to build the docker image in this
  • 01:27:41 example for the voting app I have all
  • 01:27:44 the application code in a folder named
  • 01:27:46 vote which contains all application code
  • 01:27:49 and a docker file this time when you run
  • 01:27:53 the docker compose up command it will
  • 01:27:55 first build the images give a temporary
  • 01:27:58 name for it and then use those images to
  • 01:28:01 run containers using the options you
  • 01:28:04 specified before similarly use build
  • 01:28:08 option to build the two other services
  • 01:28:11 from the respective folders
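For example, for the services built from our own code (the folder names are assumptions):

```yaml
vote:
  build: ./vote
result:
  build: ./result
worker:
  build: ./worker
```

  • 01:28:11 we will now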
  • 01:28:16 look at different versions of docker
  • 01:28:19 compose file this is important because
  • 01:28:21 you might see docker compose files in
  • 01:28:24 different formats at different places
  • 01:28:27 and wonder why they look different
  • 01:28:29 docker compose evolved over time and now
  • 01:28:33 supports a lot more options than it did
  • 01:28:35 in the beginning for example this is the
  • 01:28:38 trimmed down version of the docker
  • 01:28:41 compose file we used earlier this is in
  • 01:28:44 fact the original version of docker
  • 01:28:46 compose file known as version 1 this had
  • 01:28:50 a number of limitations for example if
  • 01:28:53 you wanted to deploy containers on a
  • 01:28:55 different network other than the default
  • 01:28:58 Bridge network there was no way of
  • 01:29:01 specifying that in this version of the
  • 01:29:03 file also say you have a dependency
  • 01:29:06 or startup order of some kind for
  • 01:29:08 example your database container must
  • 01:29:11 come up first and only then and should
  • 01:29:13 the voting application be started there
  • 01:29:16 was no way you could specify that in the
  • 01:29:18 version 1 of the docker compose file
  • 01:29:21 support for these came in version 2 with
  • 01:29:26 version 2 and up the format of the file
  • 01:29:29 also changed a little bit
  • 01:29:31 you no longer specify your stack
  • 01:29:34 information directly as you did before
  • 01:29:36 it is all encapsulated in a Services
  • 01:29:39 section so create a property called
  • 01:29:42 services in the root of the file and
  • 01:29:44 then move all the services underneath
  • 01:29:47 that you will still use the same docker
  • 01:29:50 compose up command to bring up your
  • 01:29:52 application stack but how does docker
  • 01:29:55 compose know what version of the file
  • 01:29:58 you're using you're free to use version
  • 01:30:01 1 or version 2 depending on your needs
  • 01:30:03 so how does the docker compose know what
  • 01:30:07 format you are using for version 2 and
  • 01:30:11 up you must specify the version of
  • 01:30:14 docker compose file you are intending to
  • 01:30:16 use by specifying the version at the top
  • 01:30:19 of the file in this case version : 2
  • 01:30:24 another difference is with networking in
  • 01:30:27 version 1 docker compose attaches all
  • 01:30:31 the containers it runs to the default
  • 01:30:34 bridged Network and then use links to
  • 01:30:37 enable communication between the
  • 01:30:39 containers as we did before with version
  • 01:30:42 2 docker compose automatically creates a
  • 01:30:46 dedicated bridged Network for this
  • 01:30:48 application and then attaches all
  • 01:30:51 containers to that new network all
  • 01:30:54 containers are then able to communicate
  • 01:30:56 to each other using each other's service
  • 01:30:59 name
  • 01:31:00 so you basically don't need to use links
  • 01:31:03 in version 2 of docker compose you can
  • 01:31:06 simply get rid of all the links you
  • 01:31:08 mentioned in version 1 when you convert
  • 01:31:11 a file from version one to version two
  • 01:31:15 and finally version 2 also introduces
  • 01:31:18 a depends on feature if you wish to
  • 01:31:20 specify a start-up order for instance
  • 01:31:23 say the voting web application is
  • 01:31:25 dependent on the Redis service so you
  • 01:31:28 need to ensure that Redis container is
  • 01:31:31 started first and only then the voting
  • 01:31:33 web application must be started we could
  • 01:31:36 add a depends on property to the voting
  • 01:31:38 application and indicate that it is
  • 01:31:41 dependent on Redis
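A version 2 sketch showing the services section and depends_on:

```yaml
version: "2"
services:
  redis:
    image: redis
  vote:
    image: voting-app
    ports:
      - 5000:80
    depends_on:
      - redis
```

  • 01:31:41 then comes version 3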
  • 01:31:47 which is the latest as of today version
  • 01:31:50 3 is similar to version 2 in the
  • 01:31:53 structure meaning it has a version
  • 01:31:55 specification at the top and a Services
  • 01:31:57 section under which you put all your
  • 01:31:59 services just like in version 2 make
  • 01:32:03 sure to specify the version number as 3
  • 01:32:05 at the top version 3 comes with support
  • 01:32:09 for docker swarm which we will see later
  • 01:32:11 on there are some options that were
  • 01:32:14 removed and added to see details on
  • 01:32:17 those you can refer to the documentation
  • 01:32:19 section using the link in the reference
  • 01:32:21 page following this lecture we will see
  • 01:32:24 version 3 in much detail later when we
  • 01:32:27 discuss about docker stacks let us talk
  • 01:32:31 about networks in docker compose getting
  • 01:32:35 back to our application so far we have
  • 01:32:38 been just deploying all containers on
  • 01:32:40 the default bridged Network let us say
  • 01:32:44 we modify the architecture a little bit
  • 01:32:46 to contain the traffic from the
  • 01:32:48 different sources for example we would
  • 01:32:51 like to separate the user generated
  • 01:32:53 traffic from the applications internal
  • 01:32:55 traffic so we create a front-end network
  • 01:32:58 dedicated for traffic from users and a
  • 01:33:01 back-end network dedicated for traffic
  • 01:33:04 within the application we then connect
  • 01:33:07 the user facing applications which are
  • 01:33:09 the voting app and the result app to the
  • 01:33:12 front-end network and all the components
  • 01:33:15 to an internal back-end network so back
  • 01:33:21 in our docker compose file note that I
  • 01:33:24 have actually stripped out the port
  • 01:33:26 section for simplicity's sake
  • 01:33:28 they're there but they're just not
  • 01:33:29 shown here
  • 01:33:31 the first thing we need to do if we were
  • 01:33:34 to use networks is to define the
  • 01:33:36 networks we are going to use in our case
  • 01:33:39 we have two networks front end and back
  • 01:33:42 end so create a new property called
  • 01:33:45 networks at the root level adjacent to
  • 01:33:48 the services in the docker compose file
  • 01:33:50 and add a map of networks we are
  • 01:33:53 planning to use then under each service
  • 01:33:57 create a networks property and provide a
  • 01:33:59 list of networks that service must be
  • 01:34:02 attached to in case of Redis and DB it's
  • 01:34:06 only the back-end network in case of the
  • 01:34:09 front-end applications such as the
  • 01:34:12 voting app and the result app they
  • 01:34:14 require to be attached to both the front-end
  • 01:34:17 and back-end Network you must also add a
  • 01:34:20 section for worker container to be added
  • 01:34:23 to the back-end Network I have just
  • 01:34:25 omitted that in this slide due to
  • 01:34:27 space constraints
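A sketch of the networks section and the per-service attachments (the worker service is omitted here as on the slide):

```yaml
version: "2"
services:
  redis:
    image: redis
    networks:
      - back-end
  db:
    image: postgres
    networks:
      - back-end
  vote:
    image: voting-app
    networks:
      - front-end
      - back-end
  result:
    image: result-app
    networks:
      - front-end
      - back-end
networks:
  front-end:
  back-end:
```

  • 01:34:27 now that you have seen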
  • 01:34:31 docker compose files head over to the
  • 01:34:33 coding exercises and practice developing
  • 01:34:36 some docker compose files that's it for
  • 01:34:39 this lecture and I will see you in the
  • 01:34:42 next lecture we will now look at docker
  • 01:34:54 registry so what is a registry if the
  • 01:34:59 containers were the rain then they would
  • 01:35:02 rain from the docker registry which are
  • 01:35:04 the clouds
  • 01:35:05 that's where docker images are stored
  • 01:35:08 it's a central repository of all docker
  • 01:35:11 images let's look at a simple nginx
  • 01:35:14 container we run the docker run nginx
  • 01:35:17 command to run an instance of the nginx
  • 01:35:20 image let's take a closer look at that
  • 01:35:23 image name now the name is nginx but
  • 01:35:27 what is this image and where is this
  • 01:35:29 image pulled from this name follows
  • 01:35:32 Dockers image naming convention nginx
  • 01:35:35 here is the image or the repository name
  • 01:35:39 when you say nginx
  • 01:35:41 it's actually nginx / nginx the first
  • 01:35:45 part stands for the user or account name
  • 01:35:48 so if you don't provide an account or a
  • 01:35:51 repository name it assumes that it is
  • 01:35:53 the same as the given name which in this
  • 01:35:55 case is nginx the user name is usually
  • 01:35:59 your docker hub account name or if it is
  • 01:36:01 an organization then it's the name of
  • 01:36:03 the organization if you were to create
  • 01:36:06 your own account and create your own
  • 01:36:08 repositories or images under it then you
  • 01:36:12 would use a similar pattern now where
  • 01:36:15 are these images stored and pulled from
  • 01:36:17 since we have not specified the location
  • 01:36:20 where these images are to be pulled from
  • 01:36:22 it is assumed to be on Dockers default
  • 01:36:25 registry docker hub the dns name for
  • 01:36:29 which is docker.io the registry is where
  • 01:36:32 all the images are stored whenever you
  • 01:36:35 create a new image or update an existing
  • 01:36:37 image you push it to the registry and
  • 01:36:40 every time anyone deploys this
  • 01:36:42 application it is pulled from that
  • 01:36:44 registry there are many other popular
  • 01:36:46 registries as well for example Google's
  • 01:36:49 registry is gcr.io where a
  • 01:36:51 lot of kubernetes related images are
  • 01:36:53 stored like the ones used for performing
  • 01:36:56 end-to-end tests on the cluster these
  • 01:36:58 are all publicly accessible images that
  • 01:37:01 anyone can download and access when you
  • 01:37:05 have applications built in-house that
  • 01:37:07 shouldn't be made available to the
  • 01:37:08 public
  • 01:37:09 hosting an internal private registry may
  • 01:37:12 be a good solution many cloud service
  • 01:37:14 providers such as AWS Azure or
  • 01:37:16 GCP provide a private registry by
  • 01:37:20 default when you open an account with
  • 01:37:22 them on any of these solutions be it
  • 01:37:25 docker hub or Google registry or your
  • 01:37:28 internal private registry you may choose
  • 01:37:30 to make a repository private so that it
  • 01:37:33 can only be accessed using a set of
  • 01:37:35 credentials from Dockers perspective to
  • 01:37:38 run a container using an image from a
  • 01:37:41 private registry you first log into your
  • 01:37:43 private registry using the docker login
  • 01:37:46 command input your credentials once
  • 01:37:48 successful run the application using
  • 01:37:50 private registry as part of the image
  • 01:37:53 name like this
  • 01:37:54 now if you did not log into the private
  • 01:37:56 registry it will come back saying that
  • 01:37:59 the image cannot be found so remember to
  • 01:38:02 always log in before pulling or pushing
  • 01:38:04 to a private registry we said that cloud
  • 01:38:08 providers like AWS or GCP provide a
  • 01:38:11 private registry when you create an
  • 01:38:13 account with them but what if you are
  • 01:38:15 running your application on-premise and
  • 01:38:17 don't have a private registry how do you
  • 01:38:19 deploy your own private registry within
  • 01:38:22 your organization the docker registry is
  • 01:38:25 itself another application and of course
  • 01:38:28 is available as a docker image the name
  • 01:38:30 of the image is registry and it exposes
  • 01:38:34 the API on port 5000 now that you have
  • 01:38:37 your custom registry running at port
  • 01:38:40 5000 on this docker host how do you
  • 01:38:43 push your own image to it use the docker
  • 01:38:47 image tag command to tag the image with
  • 01:38:50 a private registry URL in it in this
  • 01:38:53 case since it's running on the same
  • 01:38:55 docker host I can use localhost:5000
  • 01:38:58 followed by the image name I can
  • 01:39:02 then push my image to my local private
  • 01:39:04 registry using the command docker push
  • 01:39:07 and the new image name with the docker
  • 01:39:09 registry information in it from there on
  • 01:39:12 I can pull my image from anywhere within
  • 01:39:14 this network using either localhost if
  • 01:39:16 you're on the same host or the IP or
  • 01:39:19 domain name of my docker host if I'm
  • 01:39:22 accessing from another host in my
  • 01:39:24 environment
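A sketch of deploying the registry and pushing to it (my-image is a placeholder):

```sh
docker run -d -p 5000:5000 --name registry registry:2
docker image tag my-image localhost:5000/my-image
docker push localhost:5000/my-image
docker pull localhost:5000/my-image
```

  • 01:39:24 well that's it for this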
  • 01:39:27 lecture
  • 01:39:27 head over to the practice test and
  • 01:39:29 practice working with private docker
  • 01:39:32 registries
  • 01:39:33 [Music]
  • 01:39:42 welcome to this lecture on docker engine
  • 01:39:45 in this lecture we will take a deeper
  • 01:39:47 look at Dockers architecture how it
  • 01:39:50 actually runs applications in isolated
  • 01:39:52 containers and how it works under the
  • 01:39:55 hood docker engine as we have learned
  • 01:39:59 before is simply referred to a host with
  • 01:40:02 docker installed on it when you install
  • 01:40:04 docker on a Linux host you're actually
  • 01:40:07 installing three different components
  • 01:40:09 the docker daemon the REST API server and
  • 01:40:12 the docker CLI the docker daemon is a
  • 01:40:15 background process that manages docker
  • 01:40:18 objects such as the images containers
  • 01:40:21 volumes and networks the docker REST API
  • 01:40:24 server is the API interface that
  • 01:40:26 programs can use to talk to the daemon
  • 01:40:29 and provide instructions you could
  • 01:40:31 create your own tools using this REST
  • 01:40:33 API and the docker CLI is nothing but
  • 01:40:36 the command-line interface that we've
  • 01:40:38 been using until now to perform actions
  • 01:40:41 such as running a container stopping
  • 01:40:43 containers destroying images etc it uses
  • 01:40:47 the REST API to interact with the docker
  • 01:40:50 daemon something to note here is that the
  • 01:40:54 docker CLI need not necessarily be on
  • 01:40:57 the same host it could be on another
  • 01:40:59 system like a laptop and can still work
  • 01:41:03 with a remote docker engine simply use
  • 01:41:06 the -H option on the docker command
  • 01:41:09 and specify the remote docker engine
  • 01:41:12 address and a port as shown here for
  • 01:41:15 example to run a container based on
  • 01:41:19 nginx on a remote docker host run the
  • 01:41:22 command docker -H=10.123.2.1:2375
  • 01:41:27 run nginx now
  • 01:41:38 let's try
  • 01:41:38 understand how exactly our applications
  • 01:41:41 containerized in docker how does it work
  • 01:41:44 under the hood
  • 01:41:45 docker uses namespaces to isolate
  • 01:41:48 workspace process IDs network
  • 01:41:51 inter-process communication mounts and
  • 01:41:54 unix time sharing systems are created in
  • 01:41:58 their own namespace thereby providing
  • 01:42:00 isolation between containers let's take
  • 01:42:06 a look at one of the namespace isolation
  • 01:42:08 technique process ID namespaces whenever
  • 01:42:11 a Linux system boots up it starts with
  • 01:42:14 just one process with a process ID of
  • 01:42:16 one this is the root process and kicks
  • 01:42:19 off all the other processes in the
  • 01:42:21 system by the time the system boots up
  • 01:42:24 completely we have a handful of
  • 01:42:26 processes running this can be seen by
  • 01:42:29 running the PS command to list all the
  • 01:42:32 running processes the process IDs are
  • 01:42:35 unique and two processes cannot have the
  • 01:42:38 same process ID now if we were to create
  • 01:42:42 a container which is basically like a
  • 01:42:44 child system within the current system
  • 01:42:46 the child system needs to think that it
  • 01:42:50 is an independent system on its own and
  • 01:42:52 it has its own set of processes
  • 01:42:55 originating from a root process with a
  • 01:42:58 process ID of one but we note that there
  • 01:43:02 is no hard isolation between the
  • 01:43:04 containers and the underlying host so
  • 01:43:06 the processes running inside the
  • 01:43:10 container are in fact processes running
  • 01:43:10 on the underlying host and so two
  • 01:43:12 processes cannot have the same process
  • 01:43:15 ID of one this is where namespaces come
  • 01:43:18 into play with process ID namespaces
  • 01:43:21 each process can have multiple process
  • 01:43:23 IDs associated with it for example when
  • 01:43:26 the processes start in the container
  • 01:43:28 it's actually just another set of
  • 01:43:30 processes on the base Linux system and
  • 01:43:32 it gets the next available process ID in
  • 01:43:35 this case 5 & 6 however they also get
  • 01:43:38 another process ID starting with PID 1
  • 01:43:41 in the container name space which is
  • 01:43:43 only visible inside the container so the
  • 01:43:46 container thinks that it has its own
  • 01:43:48 root process tree and so it is an
  • 01:43:51 independent
  • 01:43:52 system so how does that relate to an
  • 01:43:54 actual system how do you see this on a
  • 01:43:57 host let's say I were to run an nginx
  • 01:44:00 server as a container we know that
  • 01:44:03 the nginx container
  • 01:44:04 runs an nginx service if we were
  • 01:44:08 to list all the services inside the
  • 01:44:10 docker container we see that the nginx
  • 01:44:13 service running with a process ID
  • 01:44:15 of one this is the process ID of the
  • 01:44:18 service inside of the container
  • 01:44:20 namespace if we list the services on the
  • 01:44:24 docker host we will see the same service
  • 01:44:26 but with a different process ID that
  • 01:44:30 indicates that all processes are in fact
  • 01:44:33 running on the same host but separated
  • 01:44:36 into their own containers using
  • 01:44:38 namespaces.
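As a quick sketch of how you could observe this yourself (assuming an NGINX container named web is running, and that its image ships the ps utility, which not all slim images do):

```sh
# Inside the container's PID namespace, nginx appears as PID 1.
docker exec web ps aux

# On the Docker host, the very same process shows up
# with a different, host-assigned PID.
ps aux | grep nginx
```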
  • 01:44:43 So we learned that the underlying docker host, as well as the
  • 01:44:45 containers share the same system
  • 01:44:47 resources such as CPU and memory how
  • 01:44:51 much of the resources are dedicated to
  • 01:44:53 the host and the containers and how does
  • 01:44:56 docker manage and share the resources
  • 01:44:58 between the containers by default there
  • 01:45:01 is no restriction as to how much of a
  • 01:45:04 resource a container can use and hence a
  • 01:45:07 container may end up utilizing all of
  • 01:45:09 the resources on the underlying host but
  • 01:45:13 there is a way to restrict the amount of
  • 01:45:15 CPU or memory a container can use docker
  • 01:45:19 uses C groups or control groups to
  • 01:45:21 restrict the amount of hardware
  • 01:45:23 resources allocated to each container
  • 01:45:26 this can be done by providing the
  • 01:45:30 --cpus option to the docker run command.
  • 01:45:33 providing a value of 0.5 will ensure
  • 01:45:36 that the container does not take up more
  • 01:45:39 than 50% of the host CPU at any given
  • 01:45:42 time. The same goes with memory: setting a
  • 01:45:45 value of 100m to the --memory option
  • 01:45:49 limits the amount of memory the
  • 01:45:51 container can use to a hundred megabytes.
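A minimal sketch of both limits (the image and the values are illustrative):

```sh
# Restrict the container to at most 50% of the host CPU.
docker run --cpus=.5 ubuntu

# Restrict the container to at most 100 megabytes of memory.
docker run --memory=100m ubuntu
```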
  • 01:45:54 if you are interested in reading more on
  • 01:45:57 this topic refer to the links I posted
  • 01:46:00 in the reference page. That's it for now
  • 01:46:03 on docker engine. In the next
  • 01:46:05 lecture, we talk about other advanced
  • 01:46:07 topics on docker storage and file
  • 01:46:09 systems. See you in the next lecture. In
  • 01:46:22 this course we learned that containers
  • 01:46:24 share the underlying OS kernel and as a
  • 01:46:26 result, we cannot have a Windows
  • 01:46:28 container running on a Linux host, or
  • 01:46:31 vice-versa. We need to keep this in mind
  • 01:46:33 while going through this lecture, as it is a
  • 01:46:36 very important concept and most
  • 01:46:37 beginners tend to have an issue with it.
  • 01:46:40 so what are the options available for
  • 01:46:42 docker on Windows there are two options
  • 01:46:45 available. The first one is Docker on
  • 01:46:48 Windows using docker toolbox and the
  • 01:46:51 second one is the docker desktop for
  • 01:46:54 Windows option we will look at each of
  • 01:46:56 these now let's take a look at the first
  • 01:46:59 option docker toolbox this was the
  • 01:47:02 original support for docker on Windows
  • 01:47:04 imagine that you have a Windows laptop
  • 01:47:07 and no access to any Linux system
  • 01:47:09 whatsoever but you would like to try
  • 01:47:11 docker you don't have access to a Linux
  • 01:47:14 system in the lab or in the cloud what
  • 01:47:17 would you do what I did was to install a
  • 01:47:20 virtualization software on my Windows
  • 01:47:22 system, like Oracle VirtualBox or VMware
  • 01:47:24 Workstation, and deploy a Linux VM on it,
  • 01:47:26 such as Ubuntu or Debian then install
  • 01:47:30 docker on the Linux VM and then play
  • 01:47:32 around with it this is what the first
  • 01:47:35 option really does it doesn't really
  • 01:47:38 have anything much to do with Windows
  • 01:47:41 you cannot create Windows based docker
  • 01:47:43 images or run Windows based docker
  • 01:47:45 containers you obviously cannot run
  • 01:47:48 Linux container directly on Windows
  • 01:47:50 either you're just working with docker
  • 01:47:52 on a Linux virtual machine on a Windows
  • 01:47:55 host docker however provides us with a
  • 01:47:58 set of tools to make this easy which is
  • 01:48:01 called the Docker Toolbox. The Docker
  • 01:48:04 Toolbox contains a set of tools like
  • 01:48:06 Oracle VirtualBox, Docker Engine, Docker
  • 01:48:09 Machine, Docker Compose, and a user
  • 01:48:11 interface called Kitematic. This will
  • 01:48:14 help you get started by simply
  • 01:48:16 downloading and running the Docker
  • 01:48:17 Toolbox executable,
  • 01:48:19 it will install VirtualBox and deploy a
  • 01:48:21 lightweight VM called boot2docker,
  • 01:48:24 which has Docker running in it already,
  • 01:48:26 so that you are all set to start with
  • 01:48:28 docker easily and within a short
  • 01:48:30 period of time.
  • 01:48:31 now what about requirements you must
  • 01:48:34 ensure that your operating system is
  • 01:48:36 64-bit Windows 7 or higher and that the
  • 01:48:40 virtualization is enabled on the system
  • 01:48:42 now remember, Docker Toolbox is a legacy
  • 01:48:45 solution for all the Windows systems
  • 01:48:47 that do not meet requirements for the
  • 01:48:49 newer docker for Windows option the
  • 01:48:53 second option is the newer option
  • 01:48:56 called docker desktop for Windows in the
  • 01:48:58 previous option we saw that we had
  • 01:49:00 Oracle VirtualBox installed on Windows
  • 01:49:03 and then a Linux system and then docker
  • 01:49:05 on that Linux system now with docker for
  • 01:49:08 Windows we take out Oracle VirtualBox
  • 01:49:10 and use the native virtualization
  • 01:49:12 technology available with Windows called
  • 01:49:14 Microsoft hyper-v during the
  • 01:49:16 installation process for docker for
  • 01:49:18 Windows it will still automatically
  • 01:49:20 create a Linux system underneath but
  • 01:49:22 this time it is created on the Microsoft
  • 01:49:24 hyper-v instead of Oracle VirtualBox and
  • 01:49:27 have docker running on that system
  • 01:49:29 because of this dependency on hyper-v
  • 01:49:31 this option is only supported for
  • 01:49:33 Windows 10 enterprise or professional
  • 01:49:35 Edition and on Windows Server 2016
  • 01:49:38 because both these operating systems
  • 01:49:40 come with hyper-v support by default now
  • 01:49:44 here is the most important point so far
  • 01:49:46 whatever we have been discussing with
  • 01:49:49 Docker's support for Windows, it is
  • 01:49:51 strictly for Linux containers Linux
  • 01:49:54 applications packaged into Linux docker
  • 01:49:56 images we're not talking about Windows
  • 01:49:59 applications or Windows images or
  • 01:50:01 windows containers both the options we
  • 01:50:03 just discussed will help you run a Linux
  • 01:50:06 container on a Windows host with Windows
  • 01:50:10 Server 2016 Microsoft announced support
  • 01:50:13 for Windows containers for the first
  • 01:50:15 time. You can now package
  • 01:50:18 Windows applications into Windows docker
  • 01:50:20 containers and run them on Windows
  • 01:50:22 docker host using docker desktop for
  • 01:50:25 Windows when you install docker desktop
  • 01:50:28 for Windows the default option is to
  • 01:50:31 work with Linux containers
  • 01:50:32 but if you would like to run Windows
  • 01:50:34 containers then you must explicitly
  • 01:50:37 configure docker for Windows to switch
  • 01:50:40 to using Windows containers.
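As a sketch of that switch (the install path below assumes a default Docker Desktop installation, so treat it as illustrative; run it from PowerShell):

```powershell
# -SwitchDaemon toggles the daemon between Linux containers
# and Windows containers.
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon
```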
  • 01:50:44 In early 2016, Microsoft announced Windows
  • 01:50:46 containers. Now you could create Windows
  • 01:50:49 based images and run Windows containers
  • 01:50:51 on a Windows server just like how you
  • 01:50:52 would run Linux containers on a Linux
  • 01:50:55 system. Now you can create Windows images,
  • 01:50:58 containerize your applications, and share
  • 01:51:00 them through the docker store as well
  • 01:51:03 unlike in Linux there are two types of
  • 01:51:06 containers in Windows the first one is a
  • 01:51:09 Windows Server container, which works
  • 01:51:11 exactly like Linux containers, where the
  • 01:51:13 OS kernel is shared with the underlying
  • 01:51:16 operating system. To allow a better
  • 01:51:18 security boundary between containers, and
  • 01:51:20 to allow kernels with different
  • 01:51:22 versions and configurations to coexist,
  • 01:51:24 the second option was introduced, known
  • 01:51:27 as Hyper-V isolation. With Hyper-V
  • 01:51:30 isolation each container is run within a
  • 01:51:33 highly optimized virtual machine
  • 01:51:35 guaranteeing complete kernel isolation
  • 01:51:38 between the containers and the
  • 01:51:39 underlying host now while in the Linux
  • 01:51:42 world you had a number of base images
  • 01:51:44 for a Linux system such as Ubuntu debian
  • 01:51:47 fedora Alpine etc if you remember that
  • 01:51:50 that is what you specify at the
  • 01:51:52 beginning of the Dockerfile. In the
  • 01:51:54 windows world we have two options the
  • 01:51:56 Windows Server Core and Nano Server.
  • 01:51:59 Nano Server is a headless deployment
  • 01:52:02 option for Windows Server which runs at
  • 01:52:04 a fraction of size of the full operating
  • 01:52:06 system you can think of this like the
  • 01:52:09 Alpine image in Linux the windows server
  • 01:52:12 core, though, is not as lightweight as
  • 01:52:15 you might expect it to be. Finally,
  • 01:52:18 Windows containers are supported on
  • 01:52:20 Windows Server 2016, Nano Server, and
  • 01:52:23 Windows 10 Professional and Enterprise
  • 01:52:25 Edition. Remember, Windows 10
  • 01:52:28 Professional and Enterprise Edition only
  • 01:52:30 support Hyper-V isolated containers,
  • 01:52:33 meaning as we just discussed every
  • 01:52:35 container deployed is deployed on a
  • 01:52:38 highly optimized virtual machine well
  • 01:52:41 that's it about docker on windows now
  • 01:52:44 before I finish I want to
  • 01:52:46 point out one important fact we saw two
  • 01:52:49 ways of running a docker container using
  • 01:52:51 VirtualBox or Hyper-V. But remember,
  • 01:52:53 VirtualBox and hyper-v cannot coexist on
  • 01:52:56 the same Windows host so if you started
  • 01:52:58 off with docker toolbox with VirtualBox
  • 01:53:00 and if you plan to migrate to hyper-v
  • 01:53:03 remember you cannot have both solutions
  • 01:53:05 at the same time. There is a migration
  • 01:53:06 guide available on the docker
  • 01:53:08 documentation page on how to migrate
  • 01:53:10 from VirtualBox to Hyper-V. That's
  • 01:53:14 it for now
  • 01:53:14 thank you and I will see you in the next
  • 01:53:16 lecture. We now look at docker on Mac.
  • 01:53:28 Docker on Mac is similar to docker on
  • 01:53:31 Windows there are two options to get
  • 01:53:33 started
  • 01:53:34 docker on Mac using docker toolbox or
  • 01:53:36 docker Desktop for Mac option let's look
  • 01:53:40 at the first option docker toolbox this
  • 01:53:43 was the original support for docker on
  • 01:53:45 Mac it is docker on a Linux VM created
  • 01:53:49 using VirtualBox on Mac as with Windows
  • 01:53:52 it has nothing to do with Mac
  • 01:53:53 applications or Mac based images or Mac
  • 01:53:55 containers it purely runs Linux
  • 01:53:58 containers on a Mac OS. Docker Toolbox
  • 01:54:01 contains a set of tools like Oracle
  • 01:54:03 VirtualBox, Docker Engine, Docker
  • 01:54:05 Machine, Docker Compose, and a user
  • 01:54:07 interface called Kitematic. When you
  • 01:54:09 download and install the Docker Toolbox
  • 01:54:12 executable, it installs VirtualBox and
  • 01:54:14 deploys a lightweight VM called
  • 01:54:17 boot2docker, which has Docker running in it
  • 01:54:18 already.
  • 01:54:19 this requires mac OS 10.8 or newer the
  • 01:54:25 second option is the newer option called
  • 01:54:27 docker Desktop for Mac with docker
  • 01:54:30 Desktop for Mac, we take out Oracle
  • 01:54:32 VirtualBox and use the HyperKit
  • 01:54:34 virtualization technology. During the
  • 01:54:37 installation process for docker for Mac
  • 01:54:39 it will still automatically create a
  • 01:54:42 Linux system underneath but this time it
  • 01:54:46 is created on HyperKit instead of
  • 01:54:49 Oracle VirtualBox, and has Docker running
  • 01:54:49 on that system
  • 01:54:50 this requires macOS Sierra 10.12 or
  • 01:54:54 newer, and the Mac hardware
  • 01:54:57 must be a 2010 or newer model.
  • 01:55:00 finally remember that all of this is to
  • 01:55:03 be able to run Linux containers on Mac. As
  • 01:55:06 of this recording there are no Mac based
  • 01:55:08 images or containers well that's it with
  • 01:55:11 docker on Mac for now we will now try to
  • 01:55:24 understand what container orchestration
  • 01:55:26 is so far in this course we've seen that
  • 01:55:30 with docker you can run a single
  • 01:55:32 instance of the application with a
  • 01:55:34 simple docker run command in this case
  • 01:55:37 to run a node.js based application
  • 01:55:39 you run the docker run nodejs command.
  • 01:55:42 but that's just one instance of your
  • 01:55:44 application on one docker host what
  • 01:55:47 happens when the number of users
  • 01:55:48 increase and that instance is no longer
  • 01:55:51 able to handle the load you deploy
  • 01:55:54 additional instances of your application
  • 01:55:56 by running the docker run command
  • 01:55:57 multiple times so that's something you
  • 01:56:00 have to do yourself you have to keep a
  • 01:56:02 close watch on the load and performance
  • 01:56:04 of your application and deploy
  • 01:56:05 additional instances yourself and not
  • 01:56:08 just that you have to keep a close watch
  • 01:56:10 on the health of these applications and
  • 01:56:13 if a container was to fail you should be
  • 01:56:16 able to detect that and run the docker
  • 01:56:18 run command again to deploy another
  • 01:56:20 instance of that application what about
  • 01:56:22 the health of the docker host itself
  • 01:56:24 what if the host crashes and is
  • 01:56:27 inaccessible the containers hosted on
  • 01:56:30 that host become inaccessible too so
  • 01:56:33 what do you do in order to solve these
  • 01:56:35 issues you will need a dedicated
  • 01:56:38 engineer who can sit and monitor the
  • 01:56:41 state performance and health of the
  • 01:56:43 containers and take necessary actions to
  • 01:56:45 remediate the situation but when you
  • 01:56:47 have large applications deployed with
  • 01:56:49 tens of thousands of containers,
  • 01:56:51 that's not a practical approach. So you
  • 01:56:55 could build your own scripts, and that
  • 01:56:57 would help you tackle these issues to
  • 01:57:00 some extent. Container orchestration is
  • 01:57:04 a solution designed for exactly that. It is a
  • 01:57:06 solution that consists of a set of tools
  • 01:57:09 and scripts that can help host
  • 01:57:11 containers in a production
  • 01:57:13 environment typically a container
  • 01:57:15 orchestration solution consists of
  • 01:57:18 multiple docker hosts that can host
  • 01:57:20 containers that way even if one fails
  • 01:57:23 the application is still accessible
  • 01:57:25 through the others the container
  • 01:57:27 orchestration solution easily allows you
  • 01:57:30 to deploy hundreds or thousands of
  • 01:57:32 instances of your application with a
  • 01:57:34 single command this is a command used
  • 01:57:38 for docker swarm we will look at the
  • 01:57:40 command itself in a bit
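As a hedged sketch of such a command (the image name and replica count are illustrative):

```sh
# Deploy 100 instances of a nodejs-based image across the cluster.
docker service create --replicas=100 nodejs
```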
  • 01:57:42 some orchestration solutions can help
  • 01:57:44 you automatically scale up the number of
  • 01:57:47 instances when users increase and scale
  • 01:57:49 down the number of instances when the
  • 01:57:51 demand decreases some solutions can even
  • 01:57:54 help you in automatically adding
  • 01:57:56 additional hosts to support the user
  • 01:57:58 load and not just clustering and scaling
  • 01:58:01 the container orchestration solutions
  • 01:58:03 also provide support for advanced
  • 01:58:05 networking between these containers
  • 01:58:07 across different hosts as well as load
  • 01:58:10 balancing user requests across different
  • 01:58:12 hosts. They also provide support for
  • 01:58:14 sharing storage between the hosts, as
  • 01:58:16 well as support for configuration
  • 01:58:18 management and security within the
  • 01:58:20 cluster there are multiple container
  • 01:58:22 orchestration solutions available today
  • 01:58:24 docker has Docker Swarm, Kubernetes from
  • 01:58:28 Google, and Mesos from Apache.
  • 01:58:31 While Docker Swarm is really easy to set
  • 01:58:34 up and get started it lacks some of the
  • 01:58:36 advanced auto scaling features required
  • 01:58:38 for complex production grade
  • 01:58:40 applications. Mesos, on the other hand, is
  • 01:58:43 quite difficult to set up and get
  • 01:58:45 started
  • 01:58:46 but supports many advanced features
  • 01:58:48 kubernetes arguably the most popular of
  • 01:58:52 them all, is a bit difficult to set up and
  • 01:58:54 get started but provides a lot of
  • 01:58:56 options to customize deployments and has
  • 01:58:59 support for many different vendors
  • 01:59:01 kubernetes is now supported on all
  • 01:59:04 public cloud service providers like GCP
  • 01:59:07 azure and AWS and the kubernetes project
  • 01:59:10 is one of the top-ranked projects on
  • 01:59:12 github in the upcoming lectures we will
  • 01:59:15 take a quick look at Docker Swarm and
  • 01:59:17 kubernetes
  • 01:59:19 [Music]
  • 01:59:28 we will now get a quick introduction to
  • 01:59:31 docker swarm
  • 01:59:33 docker swarm has a lot of concepts to
  • 01:59:36 cover and requires its own course but we
  • 01:59:39 will try to take a quick look at some of
  • 01:59:41 the basic details so you can get a brief
  • 01:59:43 idea of what it is. With Docker Swarm, you
  • 01:59:47 could now combine multiple docker
  • 01:59:49 machines together into a single cluster
  • 01:59:51 docker swarm will take care of
  • 01:59:53 distributing your services or your
  • 01:59:55 application instances into separate
  • 01:59:58 hosts for high availability and for load
  • 02:00:00 balancing across different systems and
  • 02:00:03 hardware. To set up a Docker Swarm, you
  • 02:00:06 must first have hosts or multiple hosts
  • 02:00:08 with docker installed on them then you
  • 02:00:10 must designate one host to be the
  • 02:00:13 manager or the master or the swarm
  • 02:00:16 manager as it is called and others as
  • 02:00:18 slaves or workers once you're done with
  • 02:00:21 that run the docker swarm init command
  • 02:00:23 on the swarm manager and that will
  • 02:00:25 initialize the swarm manager. The output
  • 02:00:28 will also provide the command to be run
  • 02:00:30 on the workers so copy the command and
  • 02:00:33 run it on the worker nodes to join the
  • 02:00:35 manager.
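A minimal sketch of that flow (the manager address is illustrative, and <token> is a placeholder for the token printed by the init command):

```sh
# On the designated manager node:
docker swarm init --advertise-addr 192.168.1.10

# On each worker node, paste the join command from the init output:
docker swarm join --token <token> 192.168.1.10:2377
```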
  • 02:00:37 After joining the swarm, the workers are also referred to as nodes,
  • 02:00:40 and you're now ready to create services
  • 02:00:42 and deploy them on the swarm cluster. So
  • 02:00:46 let's get into some more details as you
  • 02:00:49 already know to run an instance of my
  • 02:00:51 web server
  • 02:00:52 I run the docker run command and specify
  • 02:00:55 the name of the image I wish to run this
  • 02:00:58 creates a new container instance of my
  • 02:01:00 application and serves my web server now
  • 02:01:04 that we have learned how to create a
  • 02:01:05 swarm cluster, how do I utilize my
  • 02:01:07 cluster to run multiple instances of my
  • 02:01:10 web server now one way to do this would
  • 02:01:12 be to run the docker run command on each
  • 02:01:14 worker node but that's not ideal as I
  • 02:01:17 might have to log into each node and run
  • 02:01:19 this command, and there could be
  • 02:01:21 hundreds of nodes. I will have to set up
  • 02:01:23 load balancing myself, I'll have to monitor
  • 02:01:25 the state of each instance myself, and if
  • 02:01:28 instances were to fail I'll have to
  • 02:01:30 restart them myself. So it's going to be
  • 02:01:32 an impossible
  • 02:01:33 task. That is where Docker Swarm
  • 02:01:35 orchestration comes in. The Docker Swarm
  • 02:01:38 orchestrator does all of this for us. So
  • 02:01:41 far we've only set up this one cluster
  • 02:01:43 but we haven't seen orchestration in
  • 02:01:45 action. The key component of swarm
  • 02:01:48 orchestration is the docker service
  • 02:01:50 docker services are one or more
  • 02:01:53 instances of a single application or
  • 02:01:56 service that runs across the
  • 02:01:58 nodes in the swarm cluster. For example, in
  • 02:02:01 this case I could create a docker
  • 02:02:02 service to run multiple instances of my
  • 02:02:05 web server application across worker
  • 02:02:07 nodes in my swarm cluster. For this,
  • 02:02:10 I run the docker service create command
  • 02:02:12 on the manager node and specify my image
  • 02:02:15 name there which is my web server in
  • 02:02:18 this case and use the option replicas to
  • 02:02:21 specify the number of instances of my
  • 02:02:23 web server I would like to run across
  • 02:02:25 the cluster since I specified three
  • 02:02:28 replicas, I get three instances of my
  • 02:02:31 web server distributed across the
  • 02:02:33 different worker nodes.
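A sketch of that command, reusing the my-web-server image name from the example:

```sh
# Run on the manager node: create a service with three replicas
# spread across the nodes of the swarm cluster.
docker service create --replicas=3 my-web-server
```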
  • 02:02:36 Remember, the docker service command must be run on
  • 02:02:38 the manager node and not on the worker
  • 02:02:40 node the docker service create command
  • 02:02:43 is similar to the docker run command in
  • 02:02:45 terms of the options passed, such as the
  • 02:02:47 -e environment variable, the -p option for
  • 02:02:51 publishing ports, the --network option to
  • 02:02:54 attach the container to a network, etc. Well,
  • 02:02:57 that's a high-level introduction to Docker
  • 02:02:59 Swarm. There's a lot more to know, such as
  • 02:03:02 configuring multiple managers overlay
  • 02:03:04 networks etc as I mentioned it requires
  • 02:03:06 its own separate course well that's it
  • 02:03:09 for now in the next lecture we will look
  • 02:03:12 at kubernetes at a higher level
  • 02:03:15 [Music]
  • 02:03:24 we will now get a brief introduction to
  • 02:03:26 basic kubernetes concepts again
  • 02:03:29 kubernetes requires its own course, well,
  • 02:03:32 a few courses, at least five. But we will
  • 02:03:35 try to get a brief introduction to it
  • 02:03:37 here with docker you were able to run a
  • 02:03:40 single instance of an application using
  • 02:03:43 the docker CLI by running the docker run
  • 02:03:46 command, which is great; running an
  • 02:03:48 application has never been so easy
  • 02:03:51 before. With kubernetes, using the
  • 02:03:54 kubernetes CLI known as kubectl, you
  • 02:03:57 can run a thousand instances of the same
  • 02:04:00 application with a single command.
  • 02:04:02 kubernetes can scale it up to two
  • 02:04:05 thousand with another command kubernetes
  • 02:04:07 can be even configured to do this
  • 02:04:10 automatically so that instances and the
  • 02:04:12 infrastructure itself can scale up and
  • 02:04:15 down based on user load kubernetes can
  • 02:04:18 upgrade these 2000 instances of the
  • 02:04:21 application in a rolling upgrade fashion
  • 02:04:24 one at a time with a single command if
  • 02:04:27 something goes wrong it can help you
  • 02:04:29 roll back these images with a single
  • 02:04:31 command.
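As a hedged sketch of those operations (the deployment and image names are hypothetical):

```sh
# Rolling upgrade: move the deployment to a new image version,
# one instance at a time.
kubectl set image deployment/my-web-app my-web-app=my-web-app:2.0

# Watch the rollout progress, and roll back if something goes wrong.
kubectl rollout status deployment/my-web-app
kubectl rollout undo deployment/my-web-app
```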
  • 02:04:34 Kubernetes can help you test new features of your application by only
  • 02:04:37 upgrading a percentage of these
  • 02:04:39 instances through A/B testing methods.
  • 02:04:41 the kubernetes open architecture
  • 02:04:44 provides support for many many different
  • 02:04:47 network and storage vendors any network
  • 02:04:50 or storage brand that you can think of
  • 02:04:52 has a plugin for kubernetes kubernetes
  • 02:04:55 supports a variety of authentication and
  • 02:04:58 authorization mechanisms all major cloud
  • 02:05:01 service providers have native support
  • 02:05:03 for kubernetes so what's the relation
  • 02:05:06 between Docker and Kubernetes? Well,
  • 02:05:09 kubernetes uses docker hosts to host
  • 02:05:12 applications in the form of docker
  • 02:05:14 containers. Well, it need not be docker
  • 02:05:17 all the time; kubernetes supports
  • 02:05:19 alternatives to Docker as well, such as
  • 02:05:22 rkt or CRI-O.
  • 02:05:24 but let's take a quick look at the
  • 02:05:26 kubernetes architecture a kubernetes
  • 02:05:28 cluster consists of a set of nodes let
  • 02:05:32 us start with nodes. A node is a machine,
  • 02:05:35 physical or virtual, on which the
  • 02:05:37 kubernetes software, a set of
  • 02:05:39 tools, is installed. A node is a worker
  • 02:05:42 machine and that is where containers
  • 02:05:45 will be launched by kubernetes but what
  • 02:05:47 if the node on which the application is
  • 02:05:49 running fails well obviously our
  • 02:05:51 application goes down so you need to
  • 02:05:56 have more than one node. A cluster is a
  • 02:05:56 set of nodes grouped together this way
  • 02:05:59 even if one node fails you have your
  • 02:06:01 application still accessible from the
  • 02:06:03 other nodes now we have a cluster but
  • 02:06:07 who is responsible for managing this
  • 02:06:09 cluster where is the information about
  • 02:06:11 the members of the cluster stored and
  • 02:06:13 how are the nodes monitored when a node
  • 02:06:16 fails how do you move the workload of
  • 02:06:18 the failed nodes to another worker node
  • 02:06:20 that's where the master comes in the
  • 02:06:23 master is a node with the kubernetes
  • 02:06:25 control plane components installed. The
  • 02:06:29 master watches over the nodes in the
  • 02:06:31 cluster and is responsible for the
  • 02:06:33 actual orchestration of containers on
  • 02:06:36 the worker nodes when you install
  • 02:06:38 kubernetes on a system you're actually
  • 02:06:41 installing the following components an
  • 02:06:43 API server, an etcd server, a kubelet
  • 02:06:47 service, a container runtime engine like
  • 02:06:51 docker, and a bunch of controllers and
  • 02:06:53 the scheduler. The API server acts as the
  • 02:06:57 front end for kubernetes the users
  • 02:06:59 management devices command line
  • 02:07:01 interfaces all talk to the API server to
  • 02:07:04 interact with the kubernetes cluster
  • 02:07:06 next is etcd, a key-value store.
  • 02:07:09 etcd is a distributed, reliable
  • 02:07:11 key-value store used by kubernetes to store
  • 02:07:14 all data used to manage the cluster.
  • 02:07:16 Think of it this way: when you have
  • 02:07:18 multiple nodes and multiple masters in
  • 02:07:21 your cluster, etcd stores all that
  • 02:07:23 information on all the nodes in the
  • 02:07:25 cluster in a distributed manner. etcd is
  • 02:07:28 responsible for implementing locks
  • 02:07:30 within the cluster to ensure there are
  • 02:07:32 no conflicts between the masters the
  • 02:07:35 scheduler is responsible for
  • 02:07:37 distributing work, or
  • 02:07:38 containers, across multiple nodes. It
  • 02:07:40 looks for newly created containers and
  • 02:07:42 assigns them to nodes. The controllers
  • 02:07:45 are the brain behind orchestration
  • 02:07:48 they're responsible for noticing and
  • 02:07:50 responding when nodes, containers, or
  • 02:07:52 endpoints go down. The controllers
  • 02:07:55 make decisions to bring up new
  • 02:07:57 containers in such cases the container
  • 02:08:00 runtime is the underlying software that
  • 02:08:02 is used to run containers in our case it
  • 02:08:05 happens to be docker. And finally, kubelet
  • 02:08:08 is the agent that runs on each node in
  • 02:08:10 the cluster the agent is responsible for
  • 02:08:13 making sure that the containers are
  • 02:08:15 running on the nodes as expected. And
  • 02:08:17 finally we also need to learn a little
  • 02:08:20 bit about one of the command-line
  • 02:08:22 utilities known as the kube command-line
  • 02:08:24 tool, or the kubectl tool, or kube
  • 02:08:27 cuddle as it is also called. The kubectl
  • 02:08:30 tool is the kubernetes CLI, which
  • 02:08:32 is used to deploy and manage
  • 02:08:34 applications on a kubernetes cluster, to
  • 02:08:37 get cluster-related information, to get
  • 02:08:39 the status of the nodes in the cluster,
  • 02:08:40 and many other things. The kubectl
  • 02:08:44 run command is used to deploy an
  • 02:08:46 application on the cluster, the kubectl
  • 02:08:48 cluster-info command is used to
  • 02:08:50 view information about the cluster, and
  • 02:08:52 the kubectl get nodes command is
  • 02:08:54 used to list all the nodes that are part of the
  • 02:08:56 cluster. So to run hundreds of instances
  • 02:09:00 of your application across hundreds of
  • 02:09:02 nodes all I need is a single kubernetes
  • 02:09:05 command, like the sketch below.
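A hedged sketch of those commands (the deployment and image names are hypothetical; older kubectl versions achieved the same with a --replicas flag on kubectl run):

```sh
# Inspect the cluster and its nodes.
kubectl cluster-info
kubectl get nodes

# Deploy an application, then scale it out to 100 instances.
kubectl create deployment my-web-app --image=my-web-app
kubectl scale deployment my-web-app --replicas=100
```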
  • 02:09:08 Well, that's all we have for now, a quick introduction to
  • 02:09:10 Kubernetes and its architecture. We
  • 02:09:12 currently have three courses on code
  • 02:09:15 cloud on kubernetes that will take you
  • 02:09:18 from the absolute beginner to a
  • 02:09:20 certified expert, so have a look at them
  • 02:09:23 when you get a chance
  • 02:09:25 [Music]
  • 02:09:34 so we're at the end of this beginners
  • 02:09:36 course on docker. I hope you had a great
  • 02:09:39 learning experience if so please leave a
  • 02:09:41 comment below if you like my way of
  • 02:09:44 teaching you will love my other courses
  • 02:09:46 hosted on my site at code cloud we have
  • 02:09:49 courses for docker swarm kubernetes
  • 02:09:51 advanced courses on kubernetes
  • 02:09:54 certifications as well as OpenShift we
  • 02:09:57 have courses for automation tools like
  • 02:09:59 Ansible, Chef, and Puppet, and many more on
  • 02:10:02 the way. Visit code cloud at
  • 02:10:06 www.kodekloud.com
  • 02:10:13 [Music]