Most programmers will encounter an infinite loop or two at some point in their careers. Whether it is intentional or not is an entirely different story, and the two cases lead to very different situations. Infinite loops can either be incredibly efficient at helping you run your business, or they can eat up resources much like a black hole would, without making a peep. And that's where much of the fun lies. So before anything else, a quick definition of an infinite loop.
Infinite loop: Any block of source code that will run endlessly without any logic in place to prevent it from doing so.
while True:
    pass  # nothing to see here, and no way out either
And while they can be detrimental to your applications if used incorrectly, they can also be fantastic tools that simplify your overall logic and codebase, assuming they are designed well. So today, let's break down infinite loops: what they are, how they work, how they can bring down your system, and how they can be used to run some of today's most complex software.
Infinite loops usually have a negative connotation associated with them because they have been known to cause unforeseeable side effects when left unchecked. They can be hard to spot, because they just run and run without any visible end in sight. And depending on where they are running, they can cause other issues. For example, if an infinite loop were to run in your browser, then any of the following could happen:
- The browser could crash
- The browser could freeze
- The page could use up all available memory
- Your machine could become unresponsive
And if the same were to happen on a server rather than a browser, then you could use up all of the server's resources relatively quickly and bring the server down. But again, this all comes down to how you design your infinite loops. A well-crafted loop with checks and balances along the way will always have an end in sight. And that comes down to the programmer determining the type of loop, the type of data being processed, and the amount of resources used up by said loop.
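To make the resource-eating concrete, here is a minimal sketch of a loop that retains a new object on every iteration. The function name and the cap are purely illustrative; a genuinely unchecked loop would have no cap and would eventually exhaust memory.

```python
def runaway_allocation(max_iterations):
    """Simulate a loop that holds onto a new object on every pass.

    An unchecked version would have no `max_iterations` cap; it is
    added here purely so the sketch terminates instead of eating
    all available memory.
    """
    retained = []  # nothing is ever removed, so this only grows
    iterations = 0
    while True:
        retained.append(bytearray(1024))  # ~1 KB held onto each pass
        iterations += 1
        if iterations >= max_iterations:  # the check a runaway loop lacks
            break
    return iterations, len(retained)

# Even capped at 10,000 passes, this quietly pins ~10 MB.
iterations, held = runaway_allocation(10_000)
```

The point of the sketch is that nothing about any single iteration looks wrong; the damage comes from the loop never letting go of what it accumulates.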
The biggest challenge when working with infinite loops is that you can't really define infinite. Maybe 2 trillion iterations sounds like a lot. But maybe 3 trillion is where we are trying to get to, and so what may seem like an infinite loop is just taking its sweet time. This unknown factor is where most issues usually lie. And so we must be able to define infinite in a more granular way in order to avoid the potential issues mentioned above.
In some instances we really do mean infinite, and it's important to say that now. If you are programming video games or operating systems, for the most part you don't want them to stop suddenly when something happens. That would result in a bad time. You want them to run endlessly, essentially until the power is shut off.
But if you are processing a list of a few thousand records, then infinity is actually a bit more finite and we can further narrow down our min-max values.
Adding borders to infinite
Because of this uncertainty about infinity, we have to define boundaries if we are going to work with infinite loops. If we know that 1,000 iterations is more than enough to calculate data on 5 numbers, then we don't have to run our loops any longer than necessary. And for that, most languages offer the ability to break out of loops whenever you please.
while True:
    # do cool stuff here
    if finished_processing:  # illustrative exit condition
        break
It's also good practice to have these types of checks in place regardless of data size, just to be sure that everything is functioning as expected. My approach in the past when working with millions of records at any one time was to process a few tens of thousands, then pause and ask for input before continuing. This allowed me to verify that whatever data had been processed so far was indeed processed correctly.
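That pause-and-confirm approach can be sketched roughly like this. The function and parameter names are illustrative, and the `confirm` callback stands in for a real `input()` prompt so the pattern stays testable.

```python
def process_in_batches(records, batch_size, handle, confirm):
    """Process records in fixed-size batches, pausing between batches.

    `confirm` is called with the running total after each batch;
    returning False stops the run. In an interactive script it could be:
        confirm = lambda done: input(f"{done} processed, continue? ") == "y"
    (names here are illustrative, not from any particular library).
    """
    processed = 0
    for start in range(0, len(records), batch_size):
        for record in records[start:start + batch_size]:
            handle(record)
        processed += len(records[start:start + batch_size])
        if not confirm(processed):
            break  # the output looked wrong, so stop early
    return processed

# Process 100 records, 25 at a time, stopping once 50 are done.
done = process_in_batches(list(range(100)), 25, handle=lambda r: None,
                          confirm=lambda n: n < 50)
```

The value of the pattern is the early exit: a bad batch costs you tens of thousands of records, not millions.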
If you are going to be working with large or unknown iteration counts, then you have to monitor your resources carefully. For the most part, that means not accumulating objects in memory across iterations. Declaring everything within the scope of a single loop iteration should be enough for it to be reclaimed on each pass. But if not, most programming languages have their own way of clearing variables and objects from memory.
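One way to keep memory flat across iterations, sketched here in Python, is to stream records through a generator instead of building the whole collection up front. The record shape is made up for illustration.

```python
def read_records(n):
    """Yield records one at a time instead of building the whole list.

    With a generator, each record becomes unreachable (and eligible for
    garbage collection) as soon as the loop iteration that used it ends,
    so memory stays flat no matter how many iterations run.
    """
    for i in range(n):
        yield {"id": i, "payload": "x" * 100}  # stand-in for a real record

total = 0
for record in read_records(100_000):
    # `record` is rebound each pass; the previous dict is immediately
    # collectible because nothing outside this iteration holds it.
    total += record["id"] % 2
```

The same loop over a pre-built list of 100,000 dicts would hold every record in memory at once; the generator version holds exactly one.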
This is probably the most important aspect of loop management, because it has the most real-world impact. This is where hardware can fail or behave in completely unexpected ways. It's similar to how arcade machines from the '80s had kill screens when played long enough. There came a point where the hardware simply had no memory left to keep going, and the results were random and chaotic at best.
Despite the resource-eating uncertainty of infinite loops, they do have their place in software engineering, for the simple fact that many times you won't be working with well-defined data that has a known finish point. Sometimes you'll be looking for patterns in large datasets where one iteration can very well depend on the one before it, so we have to let these algorithms run and do their thing.
But they also come in handy in the following more specific scenarios.
In video games
Video games, for example, usually have a primary loop that runs continuously, doing all of the calculations required to render the current scene. When that scene ends really depends on the player. It could be 2 minutes, or you could just stand there in a field collecting virtual flowers for hours. But theoretically, you could very well stand there for infinity, and the game would have to follow suit.
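A bare-bones version of that primary loop might look like the following sketch. All the names (`get_input`, `update`, `render`, `is_running`) are illustrative stand-ins, not any engine's real API.

```python
import time

def game_loop(get_input, update, render, is_running, target_fps=60):
    """A minimal fixed-frame-budget game loop sketch.

    Runs until `is_running()` turns false, which, if the player just
    stands in a field picking flowers, may be never.
    """
    frame_time = 1.0 / target_fps
    while is_running():
        start = time.perf_counter()
        events = get_input()   # poll controller/keyboard
        update(events)         # advance the game state
        render()               # draw the current scene
        # Sleep off whatever is left of this frame's budget.
        elapsed = time.perf_counter() - start
        if elapsed < frame_time:
            time.sleep(frame_time - elapsed)

# Demo: run the loop for exactly 5 frames by counting renders.
frames = []
game_loop(get_input=lambda: [],
          update=lambda events: None,
          render=lambda: frames.append(1),
          is_running=lambda: len(frames) < 5,
          target_fps=1000)
```

Note that the exit condition lives entirely outside the loop body; the loop itself is written as if it will run forever.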
In operating systems

Your OS equally needs to run continuously, without any set end point. Even when idle, it is still essentially running logic to determine that it is, in fact, idle. The only time the loop ends is when the power gets shut off. These are more specific infinite loops, however, and they form the basis of the software. Your OS wouldn't run, at least not well, without that logic in place.
Any logically long-running hardware
There are countless pieces of hardware currently running in various capacities around the world that must continue to run regardless of input. This includes things such as power stations and satellites, which for the most part have no real business being off. So they must run continuously, barring outside interference.
But again, these systems are based around the fact that they must continuously run.
For handling large data sets
Sometimes you may in fact have to process what seems like infinite data. By that I mean data that continuously pours in with no definite end in sight. For example, let's say you have an application that can process 5 million records per day, but you are accumulating 6 million records per day at the same time. This is a very possible scenario, particularly for companies that deal with large amounts of data. You would have to design your application so that it will indeed run infinitely, processing data as it sees fit.
You can have numerous applications running 24/7, checking data queues and handling requests as they come in. And the infinite nature of those applications will in turn depend on the infinite nature of the operating system they run on.
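That queue-checking pattern can be sketched as a worker that loops forever, blocking until work arrives. The sentinel-based shutdown here is only for the demo; a production worker typically has no exit at all.

```python
import queue

STOP = object()  # sentinel so the demo can shut the worker down

def worker(jobs, handle):
    """Pull work off a queue forever, handling items as they arrive.

    In production this loop has no exit: the process lives as long as
    the operating system underneath it does. The sentinel is added here
    purely so the sketch terminates.
    """
    handled = 0
    while True:                # the intentional infinite loop
        job = jobs.get()       # blocks until work arrives
        if job is STOP:
            break
        handle(job)
        handled += 1
    return handled

# Demo: feed three records plus the shutdown sentinel.
jobs = queue.Queue()
for record in ["a", "b", "c"]:
    jobs.put(record)
jobs.put(STOP)

results = []
count = worker(jobs, results.append)
```

Because `jobs.get()` blocks, the loop costs almost nothing while idle; it only spends resources when there is actual work to do.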
Designing for infinite
As with anything else in life, preparation is key. Any time you are consciously working with infinite loops, you have to take all of the parts mentioned above into account. You'll have to determine boundaries if applicable, loop termination conditions, resource management, and whether the loop should in fact run continuously without a stopping mechanism in place.
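Those design decisions can be wired into a single loop skeleton, sketched below. The names and structure are illustrative, not from any framework: an optional iteration boundary for the "finite" case, and a stop check on every pass for the truly continuous one.

```python
def run_forever(step, *, max_iterations=None, should_stop=lambda: False):
    """A deliberately infinite loop with the escape hatches wired in.

    `max_iterations` bounds the loop when the workload is actually
    finite; `should_stop` is checked every pass so an external signal
    (a flag, a shutdown request) can end an otherwise endless run.
    """
    iterations = 0
    while True:
        if should_stop():
            break  # external shutdown signal
        if max_iterations is not None and iterations >= max_iterations:
            break  # boundary for the finite case
        step()     # the actual work of one iteration
        iterations += 1
    return iterations

# Finite mode: cap the "infinite" loop at 10 iterations.
n = run_forever(lambda: None, max_iterations=10)
```

Leaving both escape hatches off reproduces the genuinely infinite case, but then that choice is explicit in the call site rather than an accident of the loop body.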
Sometimes you are safer not going the infinite route and instead choosing the more finite approach. Working with infinite iterations is indeed sometimes odd, and each time it causes some form of anxiety as you carefully write each line of code, adding small checks along the way. It's definitely more fun than the more traditional route for many reasons. But mainly because once your application is up and running and can't be stopped, it is, in a sense, almost alive. It doesn't know when it is going to end, and neither do you. It's going to calculate some data, eat some power, choose left or right, and one day it will just stop.