- 00:00:01 now maybe you're in this course because
- 00:00:04 you want to ace coding interviews and
- 00:00:06 that would be totally fine though
- 00:00:08 this course is for everyone who's interested
- 00:00:10 in algorithms whatever your reasons
- 00:00:12 might be but if we're briefly sticking
- 00:00:15 to that interview situation you might
- 00:00:18 get an interview question like this one
- 00:00:21 you got a list of items where every item
- 00:00:24 has a value and a weight and then you
- 00:00:29 got a bag which has a maximum capacity a
- 00:00:32 maximum weight it can hold of X and
- 00:00:35 that could be any number in kilograms
- 00:00:38 anything like that so now your goal is
- 00:00:41 to write a program that basically
- 00:00:43 calculates the items that should go into
- 00:00:47 the bag to maximize the value of the
- 00:00:49 items whilst of course ensuring that you
- 00:00:52 don't exceed that maximum bag weight
- 00:00:55 so that weight capacity and
- 00:00:57 here's an example with some numbers you
- 00:01:00 might be getting or you might then
- 00:01:02 derive on your own a list of items let's
- 00:01:05 say ABC and of course that could be
- 00:01:08 concrete items like bread and butter and
- 00:01:11 whatever but here it's ABC and every
- 00:01:13 item has a value and a weight and then
- 00:01:16 we have this max weight let's say it's
- 00:01:19 eight and our goal is to derive a bag
- 00:01:22 and derive what should be in the bag
- 00:01:24 and in this very simple example here
- 00:01:26 that would be the items a and C because
- 00:01:30 combined they have a value of 13 and a
- 00:01:34 weight of 6 so the weight is below that
- 00:01:37 weight of 8 and the value is a pretty
- 00:01:40 big one now we could have also just put
- 00:01:43 item B into the bag since that also has
- 00:01:47 a weight of 8 so it would fit in but
- 00:01:49 with that we would only get a value of 6
- 00:01:52 so that wouldn't be the maximum now to us
- 00:01:55 humans that's a pretty straightforward
- 00:01:57 problem and we can solve this pretty
- 00:01:59 quickly but if you would need to write a
- 00:02:03 program if you would need to write code
- 00:02:06 that solves this you would probably
- 00:02:08 quickly reach a point where this becomes
- 00:02:11 difficult
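As a minimal sketch only, here is one brute-force way to attack the problem just described. The course's real solution comes later; the function name `findBestBag` is my own, and the individual values and weights for items A and C are assumptions chosen so their totals match the example above (combined value 13, combined weight 6), with B at value 6 and weight 8 as stated.

```javascript
// Hypothetical item data matching the A/B/C example's totals.
const items = [
  { name: 'A', value: 10, weight: 3 },
  { name: 'B', value: 6, weight: 8 },
  { name: 'C', value: 3, weight: 3 },
];
const maxWeight = 8;

// Brute force: try every subset of items and keep the most
// valuable one whose total weight fits in the bag.
function findBestBag(items, maxWeight) {
  let best = { value: 0, items: [] };
  const total = 1 << items.length; // 2^n possible subsets
  for (let mask = 0; mask < total; mask++) {
    let value = 0;
    let weight = 0;
    const chosen = [];
    for (let i = 0; i < items.length; i++) {
      if (mask & (1 << i)) { // bit i set → item i is in this subset
        value += items[i].value;
        weight += items[i].weight;
        chosen.push(items[i].name);
      }
    }
    if (weight <= maxWeight && value > best.value) {
      best = { value, items: chosen };
    }
  }
  return best;
}

console.log(findBestBag(items, maxWeight)); // { value: 13, items: ['A', 'C'] }
```

Trying all 2^n subsets gets very slow as the item list grows, which is exactly why the problem quickly becomes difficult in code and why better algorithms for it exist.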
- 00:02:12 now this task here is a very popular
- 00:02:15 problem which is called the knapsack
- 00:02:18 problem and it's often asked in
- 00:02:21 interviews throughout the course we will
- 00:02:23 solve it we're not at this point yet but
- 00:02:26 we will solve it throughout the course
- 00:02:27 but I already wanted to put it out there
- 00:02:30 that's the knapsack problem and
- 00:02:32 we'll revisit it later now the question
- 00:02:36 is why are companies asking for
- 00:02:39 something like this it's probably not a
- 00:02:42 problem you will need to solve when you
- 00:02:44 work at Google or Facebook right there
- 00:02:47 users don't pack bags well actually you
- 00:02:51 might need to solve similar problems if
- 00:02:54 you're responsible at Facebook for
- 00:02:57 coding or for working on the part where
- 00:03:01 users can create groups with a maximum
- 00:03:04 number of participants and so on you
- 00:03:07 might actually have similar problems but
- 00:03:09 most importantly questions like this are
- 00:03:12 asked to check your problem-solving
- 00:03:16 skills because as a developer you need
- 00:03:19 to be able to solve problems you're
- 00:03:22 getting hired to solve problems and even
- 00:03:25 if you're working on your own as a
- 00:03:26 freelancer you are solving problems or
- 00:03:29 if you're working on your own private
- 00:03:31 projects or on your own startup you're
- 00:03:34 also solving problems as a developer
- 00:03:36 you're always solving problems and with
- 00:03:40 that let's take a step back and have a
- 00:03:43 look at the term algorithm what does
- 00:03:46 this actually mean and why do we need
- 00:03:48 algorithms now what is an algorithm an
- 00:03:51 algorithm in simple words is just a
- 00:03:56 sequence of steps of instructions you
- 00:03:58 could say to solve a clearly defined
- 00:04:01 problem
- 00:04:04 this knapsack problem we would have to
- 00:04:07 come up with code with an algorithm
- 00:04:10 therefore which gets a clearly defined
- 00:04:12 input which has the goal of solving a
- 00:04:15 clearly defined problem maximizing the
- 00:04:17 value and keeping that weight in mind
- 00:04:20 and then you have to come up with steps
- 00:04:23 that solve that problem and that's what
- 00:04:25 an algorithm is about we have these two
- 00:04:28 key parts a sequence of steps and a
- 00:04:31 clearly defined problem and these are
- 00:04:34 the two things that in the end make up
- 00:04:36 an algorithm and the same steps with the

- 00:04:39 same input should always lead to the
- 00:04:41 same solution to a problem that's the
- 00:04:43 idea behind an algorithm and therefore
- 00:04:46 of course every program you're writing
- 00:04:48 is an algorithm the entire code all code
- 00:04:51 combined in a program is an algorithm
- 00:04:54 and of course such programs can be split
- 00:04:56 up in many smaller algorithms if you add
- 00:05:00 a click listener to a button and on
- 00:05:02 every click you add a number to another
- 00:05:05 number you also have an algorithm there
- 00:05:07 it's all an algorithm if you want to
- 00:05:09 think of it like that
- 00:05:11 and as a programmer as I mentioned your
- 00:05:13 goal is to solve problems you need to be
- 00:05:16 able to do that and of course you need
- 00:05:19 to be able to do that efficiently which
- 00:05:21 means for a given problem you want to
- 00:05:24 find the best possible solution and of
- 00:05:27 course there are different ways of
- 00:05:28 defining the best possible solution and
- 00:05:32 now of course the question is what is
- 00:05:34 the best possible solution to a given
- 00:05:36 problem is it the algorithm that has the
- 00:05:40 minimum amount of code so the least
- 00:05:43 amount of code is it the algorithm that
- 00:05:46 takes the least amount of time to finish
- 00:05:49 so that has the best performance is it
- 00:05:52 the algorithm that uses the least amount
- 00:05:55 of memory so that it doesn't take up a
- 00:05:58 lot of resources on the computer or is
- 00:06:01 it simply your personal preference
- 00:06:03 because you like code snippet a better
- 00:06:05 than code snippet B well of course we
- 00:06:08 could discuss about that the best
- 00:06:10 possible solution very much depends on
- 00:06:13 the circumstances in which you're
- 00:06:14 working but often it will be the
- 00:06:17 best performance
- 00:06:18 you're looking for that is
- 00:06:20 typically what you would consider the
- 00:06:23 best possible solution the algorithm
- 00:06:26 with the best performance so that takes
- 00:06:28 the least amount of time and with that
- 00:06:32 the question of course is how do we
- 00:06:35 measure performance because if we can't
- 00:06:38 measure it we can't judge it
- 00:06:43 now in order to find out what the best
- 00:06:46 possible algorithm is for a given
- 00:06:49 problem we need to be able to measure
- 00:06:52 performance now let's see how we could
- 00:06:54 measure it and for that let's take an
- 00:06:56 example problem your goal is to write a
- 00:07:00 function that takes one number as an
- 00:07:03 input and then builds the sum of all the
- 00:07:06 numbers leading up to that number this
- 00:07:09 function could look like that it's a
- 00:07:11 fairly trivial function we get a number
- 00:07:14 as an input n and then we have a result
- 00:07:18 0 initially we have a for loop where we
- 00:07:21 simply go through all the numbers up to
- 00:07:23 n including n we add that to the result
- 00:07:27 and we return the result in the end
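The loop-based function just described can be written out like this (the names `sumUp`, `n`, and `result` follow the narration, and this is the code that gets typed into the console next):

```javascript
// Sum all whole numbers from 1 up to and including n, using a loop.
function sumUp(n) {
  let result = 0;
  for (let i = 1; i <= n; i++) {
    result += i; // add each number to the running total
  }
  return result;
}

console.log(sumUp(3)); // 6  (1 + 2 + 3)
console.log(sumUp(4)); // 10 (1 + 2 + 3 + 4)
```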
- 00:07:30 we can simply write this together here
- 00:07:33 I'm simply doing this in the chrome
- 00:07:35 developer tools in the JavaScript
- 00:07:37 console there because for the moment
- 00:07:39 that's all we need it's just a very
- 00:07:41 basic first example for the rest of the
- 00:07:43 course we will use code files no worries
- 00:07:46 but here we could add a function sum up
- 00:07:49 which takes n as an input curly braces
- 00:07:52 opening and closing and between those
- 00:07:54 hit shift enter to not submit it but to
- 00:07:58 add more code in new lines and then here
- 00:08:01 we have the result which initially is 0
- 00:08:03 shift enter add a for loop where we have
- 00:08:07 I which is equal to 1 initially because
- 00:08:09 we start at one we could include 0 but
- 00:08:12 it makes no difference we want to go up
- 00:08:14 all the way as long as I is smaller or
- 00:08:18 equal to N and then we increment I curly
- 00:08:22 braces and shift enter and then in the
- 00:08:27 loop body we add I to result and then we
- 00:08:32 in the end here return result this is
- 00:08:36 the function that would solve this
- 00:08:38 problem if we hit enter we now submit
- 00:08:41 this function and now it's stored here
- 00:08:43 and we can use it so for example now we
- 00:08:45 can call sum up for three and I get 6 as
- 00:08:49 a result which makes sense because three
- 00:08:52 is essentially three plus 2 plus 1 these
- 00:08:55 are the numbers leading up to three and
- 00:08:57 summed up they give us 6 and thereafter
- 00:09:00 sum up for 4 gives us 10 because that's
- 00:09:02 4 plus 3 plus 2 plus 1 so this is the
- 00:09:07 problem solved this is our first
- 00:09:09 algorithm but how can we now measure the
- 00:09:12 time it takes well for that I'll clear
- 00:09:17 that it's still stored in memory I just

- 00:09:19 clear the console so that we have more
- 00:09:21 space for that we can measure the time
- 00:09:24 when we start at the time when we end
- 00:09:27 and simply take the difference for that
- 00:09:30 I'll add a start variable which is 0 and
- 00:09:32 end variable which is 0 initially and
- 00:09:35 then here we can write start and in
- 00:09:38 javascript in the browser there is a
- 00:09:40 performance object on which we can call
- 00:09:43 the now method which
- 00:09:44 gives us the current timestamp now I can
- 00:09:47 set start equal to that then call sum up
- 00:09:51 for let's say a value of five and then
- 00:09:54 end is performance dot now and with that
- 00:09:58 we can just output end minus start here
- 00:10:02 in the console you can just type it like
- 00:10:04 this and it will print the result and
- 00:10:06 this will give us the amount of time
- 00:10:09 this took to execute so if I hit enter
- 00:10:11 here this is the result I get okay let's
- 00:10:16 now repeat this for this you can press
- 00:10:18 the up arrow key and let's say we use
- 00:10:21 ten now instead of five if I hit enter
- 00:10:24 you see I get pretty much the same time
- 00:10:27 actually exactly the same time so it
- 00:10:29 looks like it takes exactly the same
- 00:10:31 time no matter what we enter here
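The measurement setup just walked through looks like this as one snippet. `performance.now()` is the browser API used in the lecture; recent Node.js versions also expose it as a global, but the exact numbers you see will vary per machine and per run:

```javascript
// The loop-based function we want to time.
function sumUp(n) {
  let result = 0;
  for (let i = 1; i <= n; i++) result += i;
  return result;
}

let start = performance.now(); // millisecond timestamp before the call
sumUp(5);
let end = performance.now();   // millisecond timestamp after the call
console.log(end - start);      // elapsed milliseconds (varies per run)
```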
- 00:10:33 well let's confirm this by using let's
- 00:10:36 say 1,000 as a value here if I now hit
- 00:10:40 enter oh okay I get a different time
- 00:10:43 roughly three times as long as it took
- 00:10:45 before right it was 0.005
- 00:10:48 before now it's 0.015 so it's
- 00:10:51 definitely not always the same time but
- 00:10:54 it seems like there is no difference
- 00:10:55 between five and ten well that's the
- 00:10:58 problem with measuring time like this in
- 00:11:01 hard numbers if we do this we have a lot
- 00:11:04 of influencing factors that have nothing
- 00:11:07 to do with our algorithm for example and
- 00:11:10 easy to imagine the computer you are
- 00:11:13 running this on matters I'm running this
- 00:11:16 on a relatively new MacBook and of
- 00:11:18 course that's a fast device on older
- 00:11:21 devices you will see different numbers
- 00:11:23 but even on this MacBook I don't always
- 00:11:26 get the same result because it depends
- 00:11:29 on how many processes are running in the
- 00:11:31 background on the browser I use on the
- 00:11:34 open tabs in the browser and on a lot of
- 00:11:37 other things in addition the browser
- 00:11:41 does certain optimizations behind the
- 00:11:44 scenes where it caches my function and
- 00:11:46 caches values so that even for one of
- 00:11:50 the same value I will not consistently
- 00:11:53 always get the same output so measuring
- 00:11:57 time like this
- 00:11:58 is not a great idea and especially for
- 00:12:00 very small values like 5 and 10 it's
- 00:12:03 very unreliable now still let's see it
- 00:12:08 for some bigger numbers so instead of
- 00:12:09 1000 let's run it for 10,000 and now
- 00:12:13 let's run it for 100,000 and let's run
- 00:12:17 it for a million and be careful don't
- 00:12:20 enter a too large number here because
- 00:12:22 eventually it will freeze your computer
- 00:12:24 but 1 million should work and you see
- 00:12:27 we're getting results here now let's run
- 00:12:30 it for 10 million that should still work
- 00:12:34 and that will be the last number 100
- 00:12:38 million now we see a trend at least it
- 00:12:43 tends to take longer for bigger numbers
- 00:12:45 we have this strange result for 100,000
- 00:12:49 which took longer than 1 million but
- 00:12:52 let's ignore that generally what we can
- 00:12:54 roughly say is that the bigger the
- 00:12:57 number the longer it takes and also
- 00:13:00 roughly especially for the bigger
- 00:13:03 numbers we can say that for 10 million
- 00:13:08 which we have here it takes around 5
- 00:13:11 times as long as for 1 million for 100
- 00:13:15 million it takes 10 times as long as for
- 00:13:18 10 million now if I try this for a
- 00:13:22 billion by adding one more 0 it gets
- 00:13:25 stuck for quite some time but then again
- 00:13:27 it's roughly 10 times the time it took
- 00:13:30 for 100 million so it looks like for big
- 00:13:34 numbers we roughly have factor 10 when
- 00:13:39 it comes to the performance when we have
- 00:13:42 factor 10 here on the number I use 1
- 00:13:46 billion instead of 100 million so that's
- 00:13:48 10 times 100 million I get 10 times the
- 00:13:52 performance so this looks like the
- 00:13:54 relation it looks like if we double the
- 00:13:57 number for big numbers or we take it
- 00:14:00 times 10 the performance is also doubled
- 00:14:03 or taken times 10 and indeed this is how
- 00:14:06 we should look at it we shouldn't care
- 00:14:07 too much about the concrete numbers
- 00:14:09 because those depend a lot on

- 00:14:11 environment especially if we're working
- 00:14:14 on small input numbers but we should
- 00:14:17 care about the general trend especially
- 00:14:19 for very large numbers so for this
- 00:14:22 function if you would want to draw a
- 00:14:25 graph for this function it
- 00:14:27 would roughly look like this now we
- 00:14:30 saw for small numbers it was very
- 00:14:32 unreliable but for bigger numbers we saw
- 00:14:35 a linear trend and theoretically that
- 00:14:37 should be there for smaller numbers as
- 00:14:39 well there we just had too many
- 00:14:41 influence factors but generally we can
- 00:14:44 say that for this algorithm a bigger
- 00:14:46 number a bigger input number leads to
- 00:14:49 more loop iterations and hence the time
- 00:14:52 it takes to execute this function
- 00:14:54 increases and for this specific function
- 00:14:58 and therefore for this specific
- 00:14:59 algorithm we saw that it seems to
- 00:15:02 increase in a linear way especially for
- 00:15:05 the bigger numbers we could see that if
- 00:15:07 we take the number times 10 the time the
- 00:15:10 function takes also is essentially 10
- 00:15:13 times the time it took before so we have
- 00:15:15 a linear growth function there in the end
- 00:15:18 the factor by which we increase the
- 00:15:21 input so in this case n is the factor by
- 00:15:24 which the time the function takes
- 00:15:26 increases it increases in a linear way
- 00:15:30 and that's how we judge performance when
- 00:15:33 we talk about algorithms we don't care
- 00:15:35 about the concrete numbers but we care
- 00:15:38 about the trend you could say about the
- 00:15:41 function of time so that we can
- 00:15:44 basically make a statement about the
- 00:15:46 performance of an algorithm based on the
- 00:15:49 time it takes in relation to the inputs
- 00:15:51 not speaking about absolute numbers but
- 00:15:54 about this function about this trend
- 00:15:56 this line and this chart here and we
- 00:15:59 also call this time complexity so in
- 00:16:02 this case we would say this sum up
- 00:16:05 function and the algorithm in there has
- 00:16:07 a linear time complexity
- 00:16:15 now of course not all algorithms and
- 00:16:18 JavaScript take linear time we saw this
- 00:16:22 here this linear time function but we
- 00:16:25 also sometimes have algorithms that take
- 00:16:28 constant time so where the number of
- 00:16:31 inputs has no influence on the time this
- 00:16:35 algorithm takes and actually I can show
- 00:16:38 you an example for constant time based
- 00:16:41 on that example we just saw here I'm in
- 00:16:44 a new tab and now I'm going to rewrite
- 00:16:46 this sum up function in a different way
- 00:16:49 because that's typical in programming
- 00:16:52 often you have more than one way of
- 00:16:55 solving a problem I mean if we didn't
- 00:16:58 have alternative ways of solving a
- 00:17:00 problem we wouldn't need to measure the
- 00:17:02 performance right this entire idea of
- 00:17:06 finding the best algorithm and of
- 00:17:09 finding a way of comparing performance
- 00:17:11 of algorithms well that entire idea
- 00:17:14 would be useless and worthless if we
- 00:17:17 only had one solution per problem but of
- 00:17:20 course in programming you can be very
- 00:17:22 creative and you can find dozens or
- 00:17:25 hundreds of solutions for one and the
- 00:17:27 same problem if you wanted to and that's
- 00:17:29 why we need a way of comparing
- 00:17:31 performance which is exactly what we're
- 00:17:33 working on right now and here previously
- 00:17:36 we solved this sum up problem with a
- 00:17:39 loop well it turns out we can solve it
- 00:17:42 in one line with a mathematical equation
- 00:17:45 there is a pretty nice mathematical
- 00:17:48 formula that does this for us and it looks
- 00:17:51 like this you divide n by 2 and then
- 00:17:56 multiply the result of this with 1 plus
- 00:17:59 n now this might look strange but this
- 00:18:02 is just math and this is just a math
- 00:18:06 formula which you can derive in
- 00:18:08 mathematics that gives you in the end
- 00:18:10 the sum of all the numbers up to a
- 00:18:13 number I can prove that if I save that
- 00:18:15 function if we call sum up for three I
- 00:18:18 get 6 for 4 I get 10 for 5 I get 15 and
- 00:18:23 15 is 5 plus 4 plus 3 plus 2 plus 1
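The formula version in code, exactly as described: divide n by 2, then multiply the result by 1 plus n:

```javascript
// Constant-time sumUp: Gauss' sum formula replaces the loop entirely.
function sumUp(n) {
  return (n / 2) * (1 + n);
}

console.log(sumUp(3)); // 6
console.log(sumUp(4)); // 10
console.log(sumUp(5)); // 15
```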
- 00:18:27 so this works and this
- 00:18:29 is just math now here's one important
- 00:18:31 difference to the solution from before
- 00:18:33 here we have no loop we have one
- 00:18:37 mathematical equation now if we try to
- 00:18:42 measure this so if I now copy this
- 00:18:44 measurement code here from before for a
- 00:18:48 very large number you immediately see
- 00:18:50 this is much faster and actually it
- 00:18:53 doesn't change much no matter which

- 00:18:55 number we use we just have this
- 00:18:58 deviation based on the system we're
- 00:18:59 running on now it even took zero
- 00:19:01 milliseconds that's fast but generally
- 00:19:05 you can see no matter which number I
- 00:19:07 enter here it's super super fast and it
- 00:19:10 doesn't really change we definitely
- 00:19:13 don't have a trend as we had it before
- 00:19:15 where large numbers led to a much longer
- 00:19:19 execution time for the function now the
- 00:19:21 execution time of this function is
- 00:19:23 pretty much always the same because in
- 00:19:26 there we have no iteration we have no
- 00:19:27 loop we have just one mathematical term
- 00:19:30 which is executed no matter which value
- 00:19:33 we feed in for n
- 00:19:36 and that is such an example for constant
- 00:19:39 time the time it takes to execute the
- 00:19:41 function is always the same no matter
- 00:19:44 which value n has and this is not only
- 00:19:47 an example for constant time this also
- 00:19:50 shows us why it's important to know
- 00:19:53 about time complexity and why it's
- 00:19:56 important that you as a developer are able
- 00:19:59 to figure out which time complexity a
- 00:20:02 function has because for this given
- 00:20:05 problem of summing up all numbers up to
- 00:20:07 n if you would have decided that this
- 00:20:10 loop solution is the best possible
- 00:20:13 solution you would have taken the slower
- 00:20:16 algorithm we were able to find a faster
- 00:20:20 one after all that's why measuring
- 00:20:23 performance is so important and why you
- 00:20:25 need to be able to write algorithms but
- 00:20:28 why you then also need to be able to
- 00:20:30 compare different alternatives to then
- 00:20:33 find the best one now besides linear and
- 00:20:36 constant time we also find algorithms
- 00:20:39 that have quadratic time or even cubic
- 00:20:43 time and a lot of other time
- 00:20:44 complexities as well pretty much
- 00:20:46 anything is possible here you can have
- 00:20:48 all kinds of different graphs here that
- 00:20:50 grow faster or slower but these are some
- 00:20:53 of the common ones now we have one
- 00:20:56 problem though I mentioned that we don't
- 00:21:00 care too much about the concrete numbers
- 00:21:02 we got for a good reason because they
- 00:21:04 are influenced by a lot of factors now
- 00:21:07 still we were able to identify some
- 00:21:09 trend here we were able to identify that
- 00:21:12 for the iteration solution with the loop
- 00:21:15 for big numbers we have a linear
- 00:21:18 relation between input and time it takes
- 00:21:20 to execute the function and for our
- 00:21:24 one-liner here we saw that the time
- 00:21:27 doesn't really change a lot it certainly
- 00:21:29 doesn't grow and as I mentioned that's
- 00:21:32 important we care about the trend about
- 00:21:35 the kind of function we have not about
- 00:21:37 the concrete numbers and when we talk
- 00:21:40 about the performance of an algorithm we
- 00:21:42 therefore could say hey this solution
- 00:21:45 has linear time complexity the other
- 00:21:49 solution has
- 00:21:50 constant time complexity but in
- 00:21:52 programming we use something which is
- 00:21:54 called Big O notation which simply makes
- 00:21:57 it easier for us to quickly express the
- 00:22:01 time complexity of a given algorithm to
- 00:22:04 other developers and Big O notation looks
- 00:22:07 like this for linear time you write O
- 00:22:10 of n for constant you have O of 1
- 00:22:13 quadratic would be O of n squared cubic
- 00:22:17 would be O of n with an exponent of three
- 00:22:19 and you got other time complexities as
- 00:22:22 well and you will see them throughout
- 00:22:23 the course now we're using this because
- 00:22:26 it's very easy to add this as an
- 00:22:28 annotation in your code or to tell your
- 00:22:31 interviewer that this solution has a
- 00:22:34 complexity of O of n so linear time
- 00:22:37 complexity Big O notation is a super
- 00:22:40 important concept you'll find it a lot
- 00:22:42 on the internet and it's the standard
- 00:22:44 way of comparing performance of
- 00:22:47 algorithms you derive the Big O of an
- 00:22:52 algorithm and then you know O of n is
- 00:22:55 linear O of 1 is constant and you know
- 00:22:57 constant is better than linear for
- 00:23:00 example so this Big O notation plays a
- 00:23:03 huge role and is super important when
- 00:23:06 you're comparing algorithms
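As a sketch of how such an annotation might look in practice, here are the two sumUp variants from this lecture with Big O noted in comments (the comment style is just one convention, not a fixed standard):

```javascript
// O(n) — linear time: the loop body runs once per value up to n.
function sumUpLoop(n) {
  let result = 0;
  for (let i = 1; i <= n; i++) result += i;
  return result;
}

// O(1) — constant time: a single calculation, regardless of n.
function sumUpFormula(n) {
  return (n / 2) * (1 + n);
}
```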
- 00:23:12 now it's nice that we know about Big O
- 00:23:15 Big O is great and we care about the
- 00:23:17 function not about the concrete numbers
- 00:23:19 now you just learned that for such a
- 00:23:22 linear time complexity we would refer to
- 00:23:24 know of n we would say this solution has
- 00:23:28 a time complexity of oh of n which just

- 00:23:32 means it has linear time complexity but
- 00:23:35 the question is how can we derive this
- 00:23:37 because the goal is not that you learn
- 00:23:40 it by heart
- 00:23:41 the goal of this course and that's
- 00:23:43 important is not that you learn about a
- 00:23:46 couple of algorithms and you memorize
- 00:23:48 how to solve them and you memorize their
- 00:23:51 time complexities the goal of this
- 00:23:54 course is that by the end of the course
- 00:23:56 you are able to come up with algorithms
- 00:23:59 on your own and you're able to derive
- 00:24:01 the time complexity on your own and of
- 00:24:04 course you could always use an approach
- 00:24:06 like we did before for logging the start
- 00:24:09 and the end time and finding the
- 00:24:11 difference and then you could try to
- 00:24:13 find the relation now for a linear time
- 00:24:16 complexity that's fairly easy but for
- 00:24:19 other time complexities it's hard to see
- 00:24:21 that relation since you always have to
- 00:24:24 keep in mind that the time you see here
- 00:24:26 is heavily influenced by a couple of
- 00:24:28 other factors so you can't really rely
- 00:24:31 on it to be very exact or correct and
- 00:24:35 therefore you might accidentally see a
- 00:24:38 linear relationship where you just have
- 00:24:41 some distortion and you actually have a
- 00:24:43 different time complexity so using these
- 00:24:46 numbers in the browser is not how you
- 00:24:48 should derive Big O and the time
- 00:24:50 complexity the question then of course
- 00:24:52 is how should you derive it for that we
- 00:24:56 use a technique which is called
- 00:24:56 asymptotic analysis and it involves
- 00:25:00 three simple steps first of all for a
- 00:25:03 given algorithm you define its function
- 00:25:06 now here I mean the mathematical
- 00:25:09 function you would write to get a graph
- 00:25:13 as we have it on the left now of course
- 00:25:15 that gives us a new problem though how
- 00:25:18 do we define that function how do we
- 00:25:20 know which function this JavaScript
- 00:25:23 function so this code has
- 00:25:26 what the mathematical equation for that would
- 00:25:29 be now one important note here I mean
- 00:25:32 the mathematical equation of its time
- 00:25:34 complexity so what would the math
- 00:25:36 function be for a graph as we have it on
- 00:25:39 the right here which is produced by this
- 00:25:41 JavaScript function how can we translate
- 00:25:44 this javascript function into a math
- 00:25:47 function that produces such a graph
- 00:25:50 deriving the mathematical time
- 00:25:52 complexity functions for JavaScript
- 00:25:55 algorithms sounds scary
- 00:25:57 but it isn't hard all you need to do in
- 00:26:00 the end is you need to count the number
- 00:26:02 of expression executions so with that I
- 00:26:05 mean you look at your function at your
- 00:26:07 algorithm and then you look at each
- 00:26:10 expression so roughly speaking every
- 00:26:13 line of code in it and you count how
- 00:26:15 often that's getting executed now of
- 00:26:18 course by counting this you will not get
- 00:26:21 the real time in milliseconds or seconds
- 00:26:24 this function will take but as I already
- 00:26:26 mentioned earlier that's not what we're
- 00:26:28 interested in anyways it's way too
- 00:26:30 unreliable and influenced by too many
- 00:26:33 aspects instead we simply assume that
- 00:26:37 every expression in JavaScript takes
- 00:26:40 roughly the same amount of time if we
- 00:26:42 then count the number of expression
- 00:26:45 executions so of code line executions we
- 00:26:48 could say we can easily compare that
- 00:26:50 number to the number of another
- 00:26:52 alternative function that's the approach
- 00:26:55 we're going to take here now we can't
- 00:26:57 have a look at that for a couple of
- 00:26:59 scenarios
- 00:27:00 let's count for n equal to 1 n equal
- 00:27:04 to 3 n equal to 10 and then let's see if
- 00:27:07 we can detect a pattern for n equal to
- 00:27:12 anything so to n for that I'll add some
- 00:27:16 space here in the function because that
- 00:27:18 makes annotating it a bit easier and now
- 00:27:21 let's count and let's start with n equal
- 00:27:24 to 1 how often does this first
- 00:27:26 expression here execute where we
- 00:27:29 initialize the result variable well I'd
- 00:27:33 say for n equal to 1 this executes one
- 00:27:37 time when this function this sum up
- 00:27:39 function executes this line here let
- 00:27:43 result equal zero is executed exactly
- 00:27:46 once now how often does this for loop
- 00:27:50 initialization run and important I
- 00:27:52 really mean the initialization of the
- 00:27:54 for loop not the body well for n equal
- 00:27:57 one this runs once we initialize this

- 00:28:00 for loop once now what with the code
- 00:28:03 inside of the loop how often does that
- 00:28:06 run well if n is equal to one then we
- 00:28:11 initialize I to one we run this loop or
- 00:28:14 we run through that body in the loop as
- 00:28:16 long as I is smaller or equal than one
- 00:28:19 and if n is equal to one that means we
- 00:28:23 execute the body in the loop exactly
- 00:28:25 once because after that first loop
- 00:28:28 iteration I will be incremented to 2
- 00:28:31 that of course is not smaller or equal
- 00:28:34 than n which is 1 and therefore will
- 00:28:37 leave the loop there after and now what
- 00:28:39 about this last expression this of
- 00:28:42 course also is only executed once once
- 00:28:45 we make it out of the loop we run this
- 00:28:47 line so that's the case for n equal to
- 00:28:50 one every expression in this function
- 00:28:52 runs once
- 00:28:53 now what for n equal three what about
- 00:28:57 that first expression here where we set
- 00:28:59 the result equal to zero well this still
- 00:29:03 only runs once it doesn't matter that n
- 00:29:06 is equal to three when the sum up
- 00:29:08 function runs this line only is reached
- 00:29:11 once what about the loop initialization
- 00:29:14 that also is only reached once we only
- 00:29:17 execute this line of code once yes we
- 00:29:21 will iterate through that loop a bit
- 00:29:23 more but the loop initialization if we
- 00:29:25 want to count it is only executed
- 00:29:27 once but what about the loop body now
- 00:29:30 well this does now indeed execute three
- 00:29:34 times because I is one initially we loop
- 00:29:38 as long as I is smaller or equal than n
- 00:29:41 and n is three which means we loop once
- 00:29:45 then we increment I to two still smaller
- 00:29:48 or equal than three so we loop a second
- 00:29:51 time we increment I to three
- 00:29:53 which is still smaller or equal than three
- 00:29:57 so we loop a third time and only then I
- 00:30:00 is incremented to four which means we
- 00:30:03 don't execute a loop again so that's now
- 00:30:06 a difference and that last expression is
- 00:30:08 still only executed once of course
- 00:30:10 because once we're out of the loop we
- 00:30:12 execute that line but thereafter the
- 00:30:14 function is done we don't run anything
- 00:30:16 else now for n equal to ten it's similar
- 00:30:20 the first two expressions and the return
- 00:30:23 statement are each only executed once
- 00:30:26 but now the body inside of the loop is
- 00:30:29 executed ten times and therefore you can
- 00:30:32 of course see a pattern here for this
- 00:30:34 algorithm we can say that the only
- 00:30:37 dynamic part is the body in our loop and
- 00:30:40 the times this body is executed is
- 00:30:43 exactly the same number as we feed into
- 00:30:46 this function here so therefore for n
- 00:30:49 equal to n the three expressions here are
- 00:30:53 only executed once but the content in
- 00:30:55 the loop the code in the loop is
- 00:30:57 executed n times and with that we can
- 00:31:00 now derive a function if we want to
- 00:31:02 express the time complexity here with T
- 00:31:06 then we could say for the case that n is
- 00:31:09 equal to n which is of course the case
- 00:31:11 we want to consider because the user is
- 00:31:13 able to feed any number in then we could
- 00:31:16 say the number of code executions we get
- 00:31:19 for this function is 1 plus 1 plus n
- 00:31:22 plus 1 now of course we can sum this up
- 00:31:26 to 3 plus n and we could rewrite this as
- 00:31:29 3 plus 1 times n simply because
- 00:31:32 well just n is the same as 1 times n and
- 00:31:35 you'll see in a second why I'm writing
- 00:31:37 it like this this is in the end a
- 00:31:39 mathematical function that would
- 00:31:41 describe the number of code executions
- 00:31:43 we have for this JavaScript function now
- 00:31:46 it does not describe the absolute amount
- 00:31:49 of milliseconds or seconds this
- 00:31:51 function takes but as I said that is
- 00:31:54 hard to quantify anyways but if we
- 00:31:57 assume that every line of code execution
- 00:32:00 roughly takes the same amount of time
- 00:32:02 you could say if we assume that then of
- 00:32:06 course we can use
- 00:32:07 such an equation to compare this
- 00:32:11 algorithm to other algorithms however we
- 00:32:15 don't even need to be that exact after
- 00:32:19 all if you always count all code
- 00:32:21 executions you have a slower algorithm
- 00:32:24 just by adding an additional line of
- 00:32:27 code in there right because we would
- 00:32:28 have to count that and therefore we
- 00:32:30 would have a different mathematical
- 00:32:33 function because we added one extra line
- 00:32:35 but of course the big picture wouldn't
- 00:32:38 really be changed that's why we don't
- 00:32:39 have to be that exact because we don't
- 00:32:41 really care about the exact number of
- 00:32:43 code executions as mentioned before we
- 00:32:46 instead care about the general function
- 00:32:49 the kind of function to be precise the
- 00:32:52 growth trend implied by that
- 00:32:54 function so therefore when we define
- 00:32:57 that function we don't need the exact
- 00:32:59 function with the exact numbers instead
- 00:33:02 we can generalize that and therefore we
- 00:33:05 could also write a times n plus B I
- 00:33:08 replaced the concrete numbers 1 and 3
- 00:33:11 with coefficients simply because the
- 00:33:14 concrete numbers don't matter it's the
- 00:33:16 big picture that matters to us here
- 00:33:18 which kind of function is this and this
- 00:33:20 is a linear function ok so now we got
- 00:33:23 the general mathematical function and
- 00:33:25 don't worry we'll of course see this
- 00:33:28 process of deriving such functions for
- 00:33:31 other algorithms in this course as well
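To recap that counting in code, the loop-based sumUp function might be sketched like this (the per-line counts in the comments match the derivation above; the exact variable names are my own):

```javascript
// Sums all whole numbers from 1 to n with a simple loop.
function sumUp(n) {
  let result = 0;                // runs once
  for (let i = 1; i <= n; i++) { // initialization runs once
    result += i;                 // loop body runs n times
  }
  return result;                 // runs once
}

console.log(sumUp(3)); // 6, because the body ran three times (1 + 2 + 3)
```

Counting per line gives T(n) = 1 + 1 + n + 1 = 3 + 1 · n, and dropping the constant and the coefficient leaves O(n).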
- 00:33:33 so we got this function now in order
- 00:33:36 to derive this Big O notation we need to
- 00:33:39 find the fastest-growing term in that
- 00:33:42 function and here it's easy that is a
- 00:33:45 times n because B is a constant
- 00:33:47 coefficient which doesn't grow at all
- 00:33:50 it's always exactly the same number a
- 00:33:52 times n on the other hand depends on n
- 00:33:55 and for a bigger n that term is bigger
- 00:33:59 so now that we found this
- 00:34:01 fastest-growing term we remove the
- 00:34:04 coefficient from that term because again
- 00:34:06 we don't care about the exact numbers we
- 00:34:09 care about the general picture the big
- 00:34:11 picture the general kind of function
- 00:34:14 this process here is called asymptotic
- 00:34:17 analysis now that we narrowed this down
- 00:34:21 basically to this function we can clearly see
- 00:34:24 that the time complexity depends on n
- 00:34:26 well and that n is simply what you put
- 00:34:30 between the parentheses of the Big O
- 00:34:32 notation that is why it's Big O of N so
- 00:34:36 with our newly gained knowledge let's
- 00:34:38 now use it to calculate the constant
- 00:34:41 time complexity and see how O of 1
- 00:34:44 is derived so here's the sumUp function
- 00:34:47 with this one liner solution with this
- 00:34:50 mathematical function and now again
- 00:34:53 let's count the number of expression
- 00:34:55 executions to find out how often that
- 00:34:58 code runs so let's look at the N equal
- 00:35:02 one case and let's evaluate that one
- 00:35:05 code expression we got here how often
- 00:35:07 does this run if n is equal to one well
- 00:35:10 it runs once
- 00:35:11 what about n equal to three now you could
- 00:35:16 think this runs three times because we
- 00:35:18 use the number n in this mathematical
- 00:35:21 calculation but that is not how
- 00:35:24 JavaScript and programming languages work if
- 00:35:27 you have a loop or something like that
- 00:35:29 then the code inside of the loop is of
- 00:35:32 course affected by the number of
- 00:35:33 iterations if you add two numbers let's
- 00:35:36 say 100 and 100 this does not mean that
- 00:35:39 JavaScript has to perform a certain
- 00:35:41 execution 100 or 200 times behind the
- 00:35:45 scenes and here n could simply be
- 00:35:48 replaced with three so you would have 3/2
- 00:35:52 which is 1.5 times 3 plus 1 which is 4
- 00:35:57 and this does not mean that it runs 1.5
- 00:36:00 or 4 times now it still only runs
- 00:36:03 exactly once so this expression here
- 00:36:07 runs once for n equal to 3 and the same
- 00:36:11 is true for n equal to 10 and for
- 00:36:13 whatever value you have in n and
- 00:36:16 therefore you can see if you write this
- 00:36:19 as a function T equals 1 there is no N
- 00:36:25 anywhere in this when we count the
- 00:36:28 number of executions we got no
- 00:36:31 expression that would run n times the
- 00:36:34 one expression we got here runs once so
- 00:36:39 if you would want to derive Big O here
- 00:36:41 then we would need to find a function well we
- 00:36:44 did that it's T equals one and here of
- 00:36:46 course there is no more general form
- 00:36:49 because there is no N in it if we find
- 00:36:52 the fastest-growing term that's of
- 00:36:53 course one it's not growing at all but
- 00:36:56 it's the only term we got and if we want
- 00:36:58 to remove the coefficient there is
- 00:37:00 nothing to remove so T equals one is our
- 00:37:03 final time complexity function and as
- 00:37:06 before we take what's left on the right
- 00:37:09 side of the equal sign and put that
- 00:37:12 between the parentheses of our Big O
- 00:37:14 notation hence you get o of 1 that's how
- 00:37:18 we would derive constant time now just
- 00:37:21 as before it is worth noting that it
- 00:37:23 doesn't matter if you have any extra
- 00:37:25 code lines in there right if we had
- 00:37:28 another console.log statement here of
- 00:37:30 course we would count two executions so
- 00:37:33 our formula would be 1 plus 1 every line
- 00:37:37 runs once so we would end up with this
- 00:37:40 equation what you would do in practice
- 00:37:43 is you would simplify this so this does
- 00:37:45 not lead to Big O of 2 but instead
- 00:37:49 we would still write Big O of 1 because
- 00:37:52 we care about the general trend that's
- 00:37:55 why we focus on the fastest-growing term
- 00:37:58 and remove the coefficient here when we
- 00:38:00 have a linear growth function or when we
- 00:38:02 have any time complexity function
- 00:38:04 because we don't care about the exact
- 00:38:07 number of code executions and different
- 00:38:10 lines we want to see the general trend
- 00:38:12 here it's linear time complexity so that
- 00:38:15 T depends directly on n in a linear way
- 00:38:19 and here it's O of 1 no matter if you
- 00:38:22 add an extra line of code or not in the
- 00:38:25 end what you want to say here in the
- 00:38:26 constant time case is that the time a
- 00:38:29 function takes to execute does not
- 00:38:32 depend at all on n and constant time is of
- 00:38:36 course the best possible case for an
- 00:38:38 algorithm because it means no matter
- 00:38:41 what you put into an algorithm it will
- 00:38:43 always deliver the same execution time
- 00:38:46 it has a constant
- 00:38:48 time complexity which is of course
- 00:38:50 awesome and highly efficient but I can
- 00:38:53 already tell you you will not be able to
- 00:38:55 convert every algorithm to a constant
- 00:38:59 time algorithm this specific algorithm
- 00:39:02 we had a look at here is an exception
- 00:39:05 where we have a linear and a constant
- 00:39:07 time complexity alternative for a lot of
- 00:39:10 problems you will not be able to find a
- 00:39:12 constant time complexity alternative but
- 00:39:15 we will see those algorithms throughout
- 00:39:18 the course and will then of course also
- 00:39:19 see different alternatives for those
- 00:39:21 algorithms
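For reference, the constant-time sumUp alternative discussed here relies on the closed-form sum formula n / 2 * (n + 1), the "magic" one-liner shown earlier in the course; a minimal sketch:

```javascript
// Constant-time alternative: a single arithmetic expression
// replaces the loop, so it runs once regardless of n.
function sumUp(n) {
  return (n / 2) * (n + 1);
}

console.log(sumUp(3));   // 6, same result as the loop version
console.log(sumUp(100)); // 5050
```

Whatever value n has, exactly one expression is evaluated, which is why this version is O(1).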
- 00:39:26 now just to make this really clear again
- 00:39:28 we do derive these Big O notations to
- 00:39:32 compare different algorithms because we
- 00:39:35 can't rely on the hard numbers when we
- 00:39:39 measure that when we measure it in
- 00:39:41 milliseconds or seconds we can't rely on
- 00:39:43 that that's why we have this Big O
- 00:39:45 notation which tells us which kind of
- 00:39:48 algorithm that is which kind of time
- 00:39:51 function it has now we learned about
- 00:39:54 constant time complexity and about
- 00:39:57 linear time complexity thus far for
- 00:39:59 constant time complexity n the number of
- 00:40:03 inputs has no effect on the time the
- 00:40:05 algorithm takes for linear time
- 00:40:08 complexity the execution time grows
- 00:40:11 guess what linearly with n so the bigger
- 00:40:15 the n the longer the execution time and
- 00:40:18 the factor should be the same so if you
- 00:40:21 put 10 times as much n into the function
- 00:40:25 it should take 10 times longer now
- 00:40:28 therefore we can clearly say that o of n
- 00:40:32 linear time complexity is worse than
- 00:40:36 constant time complexity if we have a
- 00:40:39 look at those charts we can see the same
- 00:40:41 basically for O of 1 we always have the
- 00:40:44 same execution time no matter the inputs
- 00:40:46 for O of n that's not the case so O of n is
- 00:40:50 worse than O of 1 as I mentioned before
- 00:40:53 though you don't always have the choice
- 00:40:56 you can't always come up with a solution
- 00:40:59 that has O of 1 as you will also see in
- 00:41:02 this course still that's how these two
- 00:41:05 time complexities are related and in
- 00:41:08 general here on this slide I will order
- 00:41:10 these algorithms from fast to slow and
- 00:41:14 therefore let's have a look at a couple
- 00:41:16 of other time complexities you might see
- 00:41:19 from time to time
- 00:41:20 for example logarithmic time complexity
- 00:41:23 which would be written like this O of
- 00:41:25 log n the graph could look something like
- 00:41:29 this and we can see that here maybe
- 00:41:31 initially it's slower than linear time
- 00:41:33 complexity but with bigger n it tends
- 00:41:37 to be faster because it doesn't grow
- 00:41:39 linearly it grows at a slower pace
- 00:41:41 instead so the execution time of the
- 00:41:44 function grows logarithmically with n for
- 00:41:48 O of n squared we speak of quadratic
- 00:41:52 complexity graph might look something
- 00:41:54 like this and here the execution time
- 00:41:57 grows quadratically with n so the bigger
- 00:42:00 n the bigger the execution time and it
- 00:42:02 grows much faster than for linear time
- 00:42:06 complexity we have even faster growth
- 00:42:09 and therefore an even slower algorithm
- 00:42:12 with exponential time complexity so 2 to
- 00:42:16 the power of n here this grows even
- 00:42:19 faster than quadratic time complexity
- 00:42:21 and therefore since execution time grows
- 00:42:24 exponentially with n we have a very very
- 00:42:27 very slow algorithm which will quickly
- 00:42:30 be impossible to execute even for very low
- 00:42:34 values of n you will already see problems here
- 00:42:37 now these are not all possible time
- 00:42:40 complexities we have in theory any
- 00:42:43 mathematical function or construct is
- 00:42:46 possible the most common ones are
- 00:42:49 definitely linear constant and quadratic
- 00:42:52 time complexity mixed with a bit of
- 00:42:54 logarithmic time complexity and in
- 00:42:57 general as I mentioned a couple of times
- 00:42:59 already
- 00:42:59 throughout this course we'll see plenty
- 00:43:01 of examples also for different time
- 00:43:03 complexities
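To make those growth rates concrete, here is a small numeric illustration (my own sketch, not code from the course) of how the common complexity classes compare as n grows:

```javascript
// Rough execution-count estimates for one value of n,
// one entry per complexity class discussed above.
const growth = (n) => ({
  constant: 1,          // O(1)
  logarithmic: Math.log2(n), // O(log n)
  linear: n,            // O(n)
  quadratic: n * n,     // O(n^2)
  exponential: 2 ** n,  // O(2^n)
});

console.log(growth(10));
// For n = 10: 1 vs ~3.32 vs 10 vs 100 vs 1024 —
// the exponential term already dominates at small n.
```

Even at n = 10 the exponential count is an order of magnitude above the quadratic one, which is why exponential algorithms quickly become impractical.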
- 00:43:09 now however it's time for you to
- 00:43:11 practice what you learned because it's
- 00:43:14 nice if I explain all of that but you really
- 00:43:16 need to understand it you really need to
- 00:43:18 become comfortable with coming up with
- 00:43:21 solutions and judging their quality
- 00:43:23 judging their time complexity this is
- 00:43:26 something you need to learn now we will
- 00:43:29 develop this throughout this course but
- 00:43:31 I already got a nice first practice for
- 00:43:33 you which you can absolutely solve with
- 00:43:36 standard JavaScript skills and what you
- 00:43:39 learned up to this point I want you to
- 00:43:42 write an algorithm so write a function
- 00:43:44 that takes an array of numbers as input
- 00:43:48 and calculates the sum of those numbers
- 00:43:52 then define the time complexity of that
- 00:43:55 algorithm you came up with and check if
- 00:43:58 there is a faster alternative so find
- 00:44:00 out what the lowest possible time
- 00:44:03 complexity so the fastest possible
- 00:44:05 algorithm is for this problem so if we
- 00:44:09 have a look at the code you should write
- 00:44:11 a function sumNumbers which takes
- 00:44:14 an array numbers as input and which you
- 00:44:17 can call like you see here in the second
- 00:44:20 line and you get back the response in
- 00:44:24 this case 14 for summing up those
- 00:44:26 numbers so your task is to write this
- 00:44:29 part that's inside the function and
- 00:44:31 determine the time complexity of your
- 00:44:33 solution as well as the lowest possible
- 00:44:35 time complexity you're able to come up
- 00:44:38 with now for that attached you find a
- 00:44:41 little starting project if you want to
- 00:44:43 call it like this it's a very simple
- 00:44:44 JavaScript project with a HTML file
- 00:44:47 which is empty which just imports a
- 00:44:49 JavaScript file which is also empty and
- 00:44:51 that's where you should add and call
- 00:44:54 your function outputting the result in
- 00:44:56 the console of the developer tools is
- 00:44:58 perfectly fine you don't need to do
- 00:45:00 anything else the other files you see
- 00:45:02 here are just for configuring git and
- 00:45:04 for configuring my editor for recording
- 00:45:06 so you just find the index.html and the
- 00:45:09 app.js file attached you can use this as
- 00:45:12 a starting project and add your solution
- 00:45:14 in app.js and we'll solve it together
- 00:45:17 in the next lecture
- 00:45:22 were you successful let's solve it
- 00:45:25 together now and for that I'll add a
- 00:45:27 function sumNumbers which takes my
- 00:45:32 numbers as parameter of course that
- 00:45:35 parameter name is up to you and I want
- 00:45:38 to call this with let's say an array
- 00:45:40 where I have 1 3 and 10 and I want to
- 00:45:45 console.log the result of the function
- 00:45:47 so the function should simply return the
- 00:45:49 sum now how could we sum this up if we
- 00:45:54 knew that the array always has exactly 3
- 00:45:58 elements we could return numbers 0 plus
- 00:46:05 numbers 1 plus numbers 2 that would be a
- 00:46:11 valid solution if we knew that we always
- 00:46:13 get exactly three elements now the
- 00:46:17 problem is that is not something we know
- 00:46:20 here when I gave you this exercise I
- 00:46:22 didn't mention that it would be
- 00:46:24 restricted to a certain length of number
- 00:46:27 arrays so you can't assume this
- 00:46:29 nonetheless if you could this would
- 00:46:32 probably be the best possible solution
- 00:46:34 because this has a time complexity of
- 00:46:37 all of one so a constant time complexity
- 00:46:41 simply because we have one expression
- 00:46:44 here and for numbers which has three
- 00:46:46 elements if we assume that it's always
- 00:46:48 fixed to three elements this still only
- 00:46:51 runs once so this here would be O of
- 00:46:54 1 constant time perfect unfortunately
- 00:46:58 as I mentioned it's not something we can
- 00:46:59 assume so I'll leave it here for
- 00:47:02 reference but it's probably hopefully
- 00:47:04 not a solution you came up with
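That fixed-length variant, kept for reference, might look like this (only valid under the assumption of exactly three elements):

```javascript
// Works only if the array is guaranteed to have exactly
// three elements — a single expression, so constant time.
function sumNumbers(numbers) {
  return numbers[0] + numbers[1] + numbers[2];
}

console.log(sumNumbers([1, 3, 10])); // 14
```

Because the expression never depends on the array length, it would be O(1), but it silently breaks for arrays of any other length.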
- 00:47:06 instead we don't know how long numbers
- 00:47:08 is so let's add a result variable maybe
- 00:47:12 which initially is zero and let's add a
- 00:47:15 for loop to go through all numbers we're
- 00:47:18 getting now numbers is an array here not
- 00:47:21 a single number so we can use a for-of
- 00:47:24 loop to loop through that so for every
- 00:47:27 number of numbers we execute the code
- 00:47:30 inside of the loop now the code we
- 00:47:33 execute in here is simply that we add a
- 00:47:35 number to our
- 00:47:36 result and then we return result
- 00:47:38 thereafter
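Put together, the loop-based solution just described might look like this:

```javascript
// Works for arrays of any length: add each number to a
// running result, then return it.
function sumNumbers(numbers) {
  let result = 0;                  // runs once
  for (const number of numbers) {
    result += number;              // runs once per element: n times
  }
  return result;                   // runs once
}

console.log(sumNumbers([1, 3, 10]));     // 14
console.log(sumNumbers([1, 3, 10, 50])); // 64
```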
- 00:47:39 this could be one solution for this
- 00:47:41 problem if we now save this and we run
- 00:47:45 this in the browser I get 14 which is
- 00:47:47 the expected result now let's also try
- 00:47:51 it with a different array where I also
- 00:47:52 pass 50 as a 4th value and we should now
- 00:47:56 get 64 as a result and that's the case
- 00:48:00 so our function our algorithm seems to
- 00:48:04 work the question now of course is which
- 00:48:07 time complexity does it have and can we
- 00:48:09 do better than that
- 00:48:11 now let's derive the time complexity of
- 00:48:14 this solution and for that let's count
- 00:48:16 executions so let's count for numbers
- 00:48:20 being equal to an array with three
- 00:48:22 elements if that's the case this first
- 00:48:27 line runs once right it's only executed
- 00:48:31 once when this sumNumbers function is
- 00:48:33 called it does not depend on the length
- 00:48:35 of numbers this loop here on the other
- 00:48:38 hand runs more often than that it runs
- 00:48:40 three times now the loop initialization
- 00:48:43 here actually is only done once but
- 00:48:47 inside of the loop we run the code
- 00:48:48 three times because we have three
- 00:48:50 elements so our n here is the length of
- 00:48:54 this array because that's what
- 00:48:56 determines how often this code inside of
- 00:48:59 the loop executes so that's our n now
- 00:49:03 this last line still only runs once
- 00:49:05 because it does not depend on the length
- 00:49:07 of this array
- 00:49:09 now if we have a look at this case with
- 00:49:12 50 as a fourth value this first line
- 00:49:16 still only runs once this loop is only
- 00:49:18 initialized once and this line where we
- 00:49:21 return the result also only runs once
- 00:49:23 but of course now this code inside of
- 00:49:26 the loop runs four times because we got
- 00:49:28 four elements and that is what
- 00:49:31 influences this code here and the number
- 00:49:34 of iterations we have so if you would
- 00:49:37 want to write this as a function here we
- 00:49:40 would have one plus one so this code
- 00:49:43 execution plus this code execution plus
- 00:49:46 this code execution in the last line
- 00:49:48 plus n in the end right plus four but if
- 00:49:54 we consider an infinitely long array
- 00:49:56 where we simply have a look at n
- 00:49:58 elements this would be our dynamic part
- 00:50:01 because this part here depends on the
- 00:50:03 length of the array and that is our n in
- 00:50:06 this case
- 00:50:09 so we could write this as three plus n
- 00:50:12 and as you learned that is three plus 1
- 00:50:16 times n if you want to write it like
- 00:50:17 this
- 00:50:18 so this here would be our time
- 00:50:20 complexity function now you learned that
- 00:50:23 as a first step we should then find the
- 00:50:25 fastest-growing term and that definitely
- 00:50:28 is this part here so we would simplify
- 00:50:32 to that and then we would remove the
- 00:50:34 coefficient so the 1 so we're left with
- 00:50:37 T equal n
- 00:50:40 now as a side note if you had an extra
- 00:50:42 line in there where you console.log
- 00:50:44 something you could of course count this
- 00:50:48 extra line and this would execute four
- 00:50:50 times as well but the only difference
- 00:50:52 would be that you in the end would write
- 00:50:53 plus n plus n right because you have two
- 00:50:57 n's now basically so you have 2n here
- 00:51:01 two times n hence you simplify this to
- 00:51:06 two times n you remove the coefficient
- 00:51:08 and you get the same result that's what
- 00:51:10 I meant earlier it does not matter if
- 00:51:12 you add an extra line it's about the
- 00:51:14 general big picture so therefore we
- 00:51:17 would write this as O of n and that of
- 00:51:20 course is linear time complexity that's
- 00:51:24 the time complexity we got here now the
- 00:51:26 question is can we do better
- 00:51:28 now generally that's of course not that
- 00:51:31 easy to answer you probably don't know
- 00:51:34 all possible solutions to all possible
- 00:51:38 problems but let's dive into the code is
- 00:51:40 there something we could improve now for
- 00:51:44 that of course we should focus on the
- 00:51:46 part of our code that depends on n
- 00:51:48 because we can't improve this first line
- 00:51:51 or this return statement it runs only
- 00:51:53 once so there's no way of improving that
- 00:51:55 but if we could get rid of that loop we
- 00:51:59 would certainly win a lot but to be
- 00:52:02 honest for this problem if we don't know
- 00:52:05 the length of the array in advance there
- 00:52:08 really isn't a way of getting rid of
- 00:52:10 that loop there really isn't a way of
- 00:52:13 shortening this here now of course you
- 00:52:16 could get rid of that for loop you could
- 00:52:18 get rid of that implementation we have
- 00:52:21 here and you could instead return
- 00:52:24 numbers.reduce using the built-in reduce
- 00:52:28 method that comes with JavaScript
- 00:52:31 essentially where reduce takes a
- 00:52:34 function for example an arrow function
- 00:52:36 where you then get your sum and the
- 00:52:40 current number you're looking at and you
- 00:52:44 initialize the sum with zero with that
- 00:52:47 second argument you pass to reduce and
- 00:52:49 inside of that function you return sum
- 00:52:51 plus curNum
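The reduce-based one-liner just described might be sketched like this (parameter names are my own):

```javascript
// Same summation expressed with the built-in reduce method:
// the arrow function accumulates sum, starting from 0.
function sumNumbers(numbers) {
  return numbers.reduce((sum, curNum) => sum + curNum, 0);
}

console.log(sumNumbers([1, 3, 10, 50])); // 64
```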
- 00:52:53 with that solution if I save that I
- 00:52:55 still get 64 as a result and now we have
- 00:52:59 a one-liner which is as we learned just
- 00:53:02 Big O of one constant time so we won
- 00:53:05 right it's better yeah not really
- 00:53:10 this is something you could think and if
- 00:53:12 that was your thought process I can't
- 00:53:15 blame you but it's not entirely correct
- 00:53:18 this here calls a new function right now
- 00:53:22 before when I showed you that magic
- 00:53:24 function for adding up all numbers we
- 00:53:27 had one mathematical equation that was n
- 00:53:30 divided by two times n plus 1 now we're
- 00:53:35 not calling a function here we have a
- 00:53:37 mathematical equation here which is
- 00:53:38 solved by JavaScript here we are calling
- 00:53:41 the reduce method and the reduce method
- 00:53:44 internally will do something similar to what
- 00:53:48 we're doing here so reduce will not
- 00:53:51 really turn this into a Big O of one
- 00:53:53 solution because we make our function
- 00:53:57 shorter by relying on a built-in
- 00:54:00 function but that doesn't make the
- 00:54:02 built-in function faster and that's an
- 00:54:05 important takeaway just because you were
- 00:54:07 able to shrink your code down to one
- 00:54:09 line does not make it a constant time
- 00:54:12 code that's only true if in that one
- 00:54:15 line you're not calling a new function
- 00:54:17 here we are calling a new function the
- 00:54:21 reduce method so we would have to look
- 00:54:23 into that reduce method and find out
- 00:54:25 what its time complexity is to derive
- 00:54:28 the time complexity of our function here
- 00:54:30 now you could do that but in the end I
- 00:54:33 can tell you reduce will also give us
- 00:54:35 linear time complexity so this is not an
- 00:54:39 improvement
- 00:54:40 indeed this solution we had before where
- 00:54:43 we have a simple loop where we go
- 00:54:45 through all the elements that's our best
- 00:54:47 possible solution in the end that has
- 00:54:50 linear time complexity and for this
- 00:54:52 problem
- 00:54:53 we can't do better than that and that's
- 00:54:56 what I also mentioned before you will
- 00:54:58 not always be able to get a constant
- 00:55:01 time algorithm sometimes you just have
- 00:55:04 to live with linear time complexity
- 00:55:06 like in this case sometimes you have to
- 00:55:08 live with an even worse time complexity
- 00:55:11 so here it's linear time complexity and
- 00:55:13 I hope I could make clear how we derive
- 00:55:17 this and why it is the best solution
- 00:55:19 here
- 00:55:24 so now we already had a quite nice
- 00:55:27 introduction into the basics of
- 00:55:29 algorithms the very important topic of
- 00:55:32 time complexity and therefore I now want
- 00:55:35 to give you an overview of what's inside
- 00:55:37 this course in general and what the course
- 00:55:40 goal is because this course is not an
- 00:55:44 algorithm practice book it's not full of
- 00:55:48 hundreds of algorithms through which
- 00:55:51 we're going to go but it isn't that
- 00:55:54 because I find it much more important
- 00:55:56 that you are put into the position that
- 00:55:59 you're able to solve problems on your
- 00:56:01 own you need to be able to come up with
- 00:56:04 your own solutions with your own
- 00:56:06 algorithms and this course has the goal
- 00:56:09 of giving you a foundation for that
- 00:56:10 therefore we'll of course thoroughly
- 00:56:13 look at the what and why I already gave
- 00:56:16 you an introduction to algorithms and
- 00:56:18 throughout this course we'll always come
- 00:56:20 back to this key question why are we
- 00:56:23 doing something how does something work
- 00:56:25 what is the approach here we'll see a
- 00:56:28 bunch of examples just because this
- 00:56:31 course isn't a practice book does not
- 00:56:33 mean that we're not having a lot of
- 00:56:35 examples we do but we'll see a lot of
- 00:56:37 examples on different algorithms
- 00:56:40 different kinds of algorithms and also
- 00:56:42 very important different solution
- 00:56:45 approaches so different ways of solving
- 00:56:48 algorithms but also some repetition
- 00:56:51 so that you have a chance of getting a
- 00:56:53 feeling for a certain way of solving
- 00:56:55 something so that you have a chance of
- 00:56:57 understanding how to tackle certain
- 00:56:59 problems we'll have a look at recursion
- 00:57:02 at something which is called dynamic
- 00:57:03 programming and greedy algorithms and
- 00:57:06 more the overall goal of this course is
- 00:57:10 to give you one cool thing a solid
- 00:57:13 foundation and a plan on how to tackle
- 00:57:17 and solve problems now I want to be very
- 00:57:19 clear by the end of this course you will
- 00:57:21 not be able to solve any problem that is
- 00:57:24 out there in just five seconds this is
- 00:57:26 not how it's going to work instead this
- 00:57:29 course will lay out a foundation on
- 00:57:32 which you can build upon to then
- 00:57:34 practice on your own to dive into more
- 00:57:37 algorithms and to
- 00:57:38 understand what you're doing there so
- 00:57:40 that finally after practicing and with
- 00:57:43 help of this course you are able to
- 00:57:46 solve problems to ace interviews and to
- 00:57:48 do whatever your goal is I'm going to be
- 00:57:51 very honest here because no course can
- 00:57:55 just give you that knowledge you need to
- 00:57:57 develop it on your own but for that you
- 00:57:59 need a solid foundation and that is what
- 00:58:01 this course will give you now in detail
- 00:58:03 in this course we have all those basics
- 00:58:06 and we learned about time complexity in
- 00:58:08 this first module already and that is
- 00:58:11 one of the most essential parts you need
- 00:58:14 to know to get started with algorithms
- 00:58:16 now of course getting started is nice
- 00:58:18 and we need those essentials but we also
- 00:58:21 want to apply them we want to dive into
- 00:58:23 concrete examples therefore in the next
- 00:58:25 module we'll explore math algorithms
- 00:58:28 we'll explore
- 00:58:29 algorithms that have something to do
- 00:58:31 with numbers with mathematics and where
- 00:58:34 you will already dive deeper into these
- 00:58:36 core concepts and be able to utilize
- 00:58:38 what you learned in this first module
- 00:58:40 now as a next step thereafter I'll
- 00:58:42 introduce a very important concept and
- 00:58:45 that's recursion combined with dynamic
- 00:58:48 programming you'll learn what dynamic
- 00:58:51 programming is and why it's extremely
- 00:58:52 important when we're working with
- 00:58:55 algorithms when we're tackling problems
- 00:58:58 now with that new tool with that new
- 00:59:02 knowledge gained we'll explore search
- 00:59:03 algorithms and sorting algorithms two
- 00:59:06 very important categories of algorithms
- 00:59:09 which you'll need in real life a lot
- 00:59:10 and we'll thereafter explore a different
- 00:59:13 metric
- 00:59:14 besides time complexity which I can
- 00:59:17 already say is the most important one
- 00:59:18 but besides that we'll then also explore
- 00:59:20 space complexity now with that and with
- 00:59:24 all that new knowledge you gained
- 00:59:25 throughout these modules we'll dive into
- 00:59:27 sets and array algorithms sets
- 00:59:31 essentially are arrays we could say here
- 00:59:33 and you will learn how that all works
- 00:59:36 and which tricky problems we might face
- 00:59:38 there after that we'll have had a lot of
- 00:59:41 practice and learned a lot of key
- 00:59:43 concepts and will therefore then dive
- 00:59:45 into more complex algorithms for example
- 00:59:47 the problem outlined at the very
- 00:59:50 beginning of this module
- 00:59:51 I'll give you a blueprint a plan for
- 00:59:55 solving problems and for deriving
- 00:59:57 algorithms now one important note
- 01:00:00 here though it might be tempting to just
- 01:00:02 go ahead and dive into that last
- 01:00:05 section or just dive into the sorting
- 01:00:07 algorithms section because maybe you
- 01:00:10 just want to learn sorting algorithms
- 01:00:12 this is not how this course works at
- 01:00:14 least there might be other courses which
- 01:00:16 simply are practice after practice after
- 01:00:19 practice this course has a clear idea of
- 01:00:21 building up a strong foundation that you
- 01:00:24 have at the end of this course and
- 01:00:26 therefore I strongly recommend that you
- 01:00:28 go through the sections in order and if
- 01:00:31 you do jump ahead be prepared to face
- 01:00:34 concepts or solutions which you might
- 01:00:38 not fully understand or where you need
- 01:00:40 to do your own research so my strong
- 01:00:42 recommendation is that you go through
- 01:00:44 the sections in order and then by the
- 01:00:46 end of the course you will have that
- 01:00:48 strong foundation on which you can build
- 01:00:50 upon if you enjoyed this video you might
- 01:00:54 be interested in joining the complete
- 01:00:56 JavaScript algorithms course you find a
- 01:00:58 link below the video now in that course
- 01:01:01 we'll dive way deeper into algorithms
- 01:01:03 we'll have a look at math sorting and
- 01:01:05 search algorithms we'll dive into array
- 01:01:07 and set algorithms and we'll dive into
- 01:01:10 more complex algorithms like the knapsack
- 01:01:12 problem I showed at the beginning
- 01:01:13 of this video now in addition we will
- 01:01:16 not just see more examples I will also
- 01:01:19 give you the tools to solve any
- 01:01:20 algorithm or to solve any problem with
- 01:01:22 an algorithm to be precise we'll dive
- 01:01:25 into recursion and dynamic programming
- 01:01:26 we'll have a look at greedy algorithms
- 01:01:29 and by the end of the course you will
- 01:01:30 get a plan that you can follow
- 01:01:32 step-by-step to find a solution for any
- 01:01:35 problem so I would love to welcome you
- 01:01:37 on board the course as I mentioned below
- 01:01:39 the video you'll find a link otherwise I
- 01:01:41 hopefully see you back in future videos
- 01:01:43 on this channel