What does celestial navigation have to do with DevSecOps, artificial intelligence, and machine learning?
Guest Blogger: Chris Yates, Senior Solutions Architect, Red Hat
After three straight days of riding out tumultuous, storm-swept seas, a harried and grizzled sailor walks to the benighted deck of the ship. The skies are beginning to clear, and the stars shine through. Grateful for the end of the storm, the sailor peers through a sextant and measures the angle between the rocking horizon and a star in the sky. After taking a few such sights, the sailor returns to the dry, lit interior of the ship. Using these measurements, the sailor calculates the latitude and longitude of the ship, locating its current position in the trackless, featureless expanse of the ocean.
This scene may sound like the beginning of a historically inaccurate pirate-themed novel set in centuries past, but celestial navigation was commonly used as recently as the 1980s, when inertial navigation and GPS first became widely available. Even with the advent of these new technologies, the United States Naval Academy didn’t remove celestial navigation from its curriculum until 1998, and reinstated it in 2015 due to the potential threat of cyberattacks against GPS satellites.
“But what does celestial navigation have to do with DevSecOps, artificial intelligence, and machine learning?”, I hear you ask. At first glance, not much, but if we look deeper at the process of celestial navigation, we will see many parallels with these two modern topics. Let’s start by considering a transatlantic voyage. A cargo ship leaving Montreal, Canada and heading to Plymouth, UK will travel around 3,000 nautical miles. If it took one measurement at departure, and then no more, and that measurement were only 3 degrees off, the ship would end the voyage more than 150 nautical miles from its destination. Travelling at an average speed of 14 knots, correcting that error at the end of the voyage would put the ship more than 11 hours behind schedule. Middle-school math word problems aside, we can clearly see that periodic measurements are necessary to ensure the ship arrives at its destination on time. Taking a measurement every night bounds the damage: the same 3-degree error over a single day’s run of about 336 nautical miles leaves the ship at most about 18 nautical miles off track, and corrective action can be taken each day to reorient and maintain schedule and heading.
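The navigation arithmetic can be sketched as a quick back-of-the-envelope calculation. This is a minimal illustration of the small-angle cross-track relationship (distance × sin of the heading error); the voyage distance and speed are the round figures used above, and real routes would differ.

```python
import math

def cross_track_nm(distance_nm: float, heading_error_deg: float) -> float:
    """Cross-track error for a small, constant heading error.

    Uses d * sin(theta); the mariner's "1-in-60" rule (d / 60 * theta)
    gives nearly the same answer for small angles.
    """
    return distance_nm * math.sin(math.radians(heading_error_deg))

voyage_nm = 3_000                 # Montreal to Plymouth, roughly
daily_run_nm = 14 * 24            # one day's run at 14 knots

# One bad 3-degree fix at departure, never corrected:
print(f"whole voyage: ~{cross_track_nm(voyage_nm, 3.0):.0f} nm off target")

# Same error, but caught by a sight every night:
print(f"single day:   ~{cross_track_nm(daily_run_nm, 3.0):.0f} nm off track")
```

Dividing the whole-voyage error by the ship's 14-knot speed gives the extra sailing time needed to correct course at the end.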
The parallel to DevSecOps is that waterfall design and project scheduling are like the cargo ship taking one measurement at departure and then committing to that heading for the remainder of the voyage. Even more challenging, when building custom software we are often travelling through uncharted waters: we build custom software precisely because we are attempting to solve a problem that hasn’t been solved before, or because a solution isn’t readily available in a commercial offering. Because of these challenges, it is imperative that we adopt a culture and process adaptable to the inevitability of inaccurate estimates, so that we can reorient and change course as necessary throughout the development lifecycle. This is why it is so important to understand and internalize the philosophy behind “fail fast”. In DevSecOps, “fail fast” doesn’t mean we should seek to fail, but that we should constantly measure our progress and verify our heading, ensuring not only that we are able to achieve our goals, but that our goals themselves are actually achievable. As we are all aware, uncharted waters and undocumented code are often marked, “Here be dragons.”
The uncharted waters of today separate us not from the trade routes to spices, teas, and silk of the past, but from the promises of efficiency and innovation through the application of Artificial Intelligence and Machine Learning (AI/ML) in the future. AI/ML offers the Department of Defense many advantages, with high-profile use cases including predictive and preventive maintenance, intelligence analysis, threat recognition, and information security automation, to name just a few.
At its core, modern AI/ML development begins with human analysts collecting a body of data known as a corpus. The contents of that corpus are then categorized by the analysts in a process known as curation. After curation, we enter the ingest phase, where the corpus is fed into the AI framework. This leads us to the final stage in generating an AI model, known as training, wherein the model builds labyrinthine connections between features of the data in the corpus. The output of all of this is a model: applying the categorization rules it learned from the corpus during training, the model can make predictions or assertions about new input.
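The corpus, curation, ingest, and training stages above can be sketched end to end. This is a deliberately tiny illustration, not any particular framework: the "training" here just builds a bag-of-words profile per label, and the example texts and labels are invented.

```python
from collections import Counter

# 1. Corpus: analysts collect raw examples (invented toy data).
corpus = [
    "engine vibration exceeds threshold",
    "bearing temperature rising fast",
    "routine status report nominal",
    "all systems nominal after patrol",
]

# 2. Curation: analysts label each example.
labels = ["maintenance", "maintenance", "nominal", "nominal"]

# 3. Ingest + training: build one bag-of-words profile per label.
def train(corpus, labels):
    model = {}
    for text, label in zip(corpus, labels):
        model.setdefault(label, Counter()).update(text.split())
    return model

# 4. The trained model scores new, unseen input against each profile.
def predict(model, text):
    words = text.split()
    return max(model, key=lambda lbl: sum(model[lbl][w] for w in words))

model = train(corpus, labels)
print(predict(model, "bearing vibration warning"))  # -> maintenance
```

A production model would learn far richer connections than word counts, but the shape of the workflow, from labeled corpus to a model that classifies new input, is the same.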
There are a few different levers available to us that affect the quality of our model. The first is the size of our corpus: more data points lead to a more adaptable model. The second lever is the quality of our curation. If we can categorize and weight our curated data into finer-grained categories, we improve our model’s ability to categorize the field data we will expect it to analyze later. Finally, we can modify the algorithms we use to train the model. The training phase itself has several dimensions of configurability that are beyond the scope of this blog post, two of which are the number of iterations through the corpus and the features of the model’s internal taxonomy. Pushing or pulling on any of these levers will have a material effect on the performance of the trained model in production, sometimes counterintuitively.
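Two of those levers, corpus size and the number of training iterations, can be demonstrated with a toy model. The sketch below fits a one-parameter model y = w·x by gradient descent on invented data whose true relationship is y = 2x; both a larger corpus and more iterations pull the learned weight closer to 2.0.

```python
# Toy demonstration of two training "levers": corpus size and iterations.
# All data is synthetic; the true relationship is y = 2 * x.

def train(data, iterations, lr=0.01):
    w = 0.0
    for _ in range(iterations):
        for x, y in data:
            w += lr * (y - w * x) * x   # gradient step on squared error
    return w

small_corpus = [(1, 2.0), (2, 4.0)]
large_corpus = small_corpus + [(3, 6.0), (4, 8.0), (5, 10.0)]

for corpus, name in [(small_corpus, "small"), (large_corpus, "large")]:
    for iters in (1, 50):
        print(f"{name} corpus, {iters:>2} iterations: w = {train(corpus, iters):.3f}")
```

With one pass over the small corpus the weight barely moves; with fifty passes over the large corpus it converges to 2.000. Real training runs behave less cleanly, which is why, as noted above, these levers sometimes act counterintuitively.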
Yet, as the saying goes, the only constant in the universe is change. Our adversaries will be continuously developing innovative new attack methods. The battles of tomorrow will not be won with the tactics and behaviors of yesterday. The capabilities we develop today will need to be constantly evolving to overmatch the threats we will face in the future. This means the models we train today will need to have the ability to be replaced with new models as we identify and develop new tactics to counter new threats. The process for developing new models tightly aligns with the DevSecOps methodology.
When we introduce the AI/ML model development workflow to the DevSecOps methodology, we can see that it fits in naturally. The outputs we get from monitoring, including false positives, false negatives, and abnormalities, are important components to be curated and ingested via a feedback loop for the progressive refinement of the model. In this manner, we improve the performance of the model through the growth of the corpus. The fielding of AI/ML models is subject to at least two forcing functions that require the model to be regularly updated. Firstly, the field of AI/ML is rapidly evolving, and great innovations are being discovered and implemented at a rapid pace. Secondly, but perhaps primarily, the adversarial nature of how we will leverage AI/ML in the DoD dictates that our models must be constantly adapting to attain and maintain advantage.
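The monitoring feedback loop can be sketched concretely: a field sample the model gets wrong is flagged, curated by an analyst, added to the corpus, and the model is retrained. This reuses a toy bag-of-words classifier for illustration; all texts and labels are invented.

```python
from collections import Counter

def train(corpus):
    """Build a bag-of-words profile per label from (text, label) pairs."""
    model = {}
    for text, label in corpus:
        model.setdefault(label, Counter()).update(text.split())
    return model

def predict(model, text):
    words = text.split()
    return max(model, key=lambda lbl: sum(model[lbl][w] for w in words))

corpus = [
    ("port scan detected from external host", "threat"),
    ("scheduled backup completed after midnight", "benign"),
]
model = train(corpus)

# Monitoring surfaces a miss: a real threat the model calls benign.
field_sample = "unusual login attempts after hours"
print(predict(model, field_sample))   # -> benign (a false negative)

# Feedback loop: the analyst curates the miss, the corpus grows, retrain.
corpus.append((field_sample, "threat"))
model = train(corpus)
print(predict(model, field_sample))   # -> threat
```

Each pass through the loop grows the corpus with exactly the cases the fielded model handled worst, which is what drives the progressive refinement described above.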
Over the last 20 years, the rate at which data accumulates has been accelerating. Ten years ago, the total amount of data generated by mankind was doubling every two years. Today that rate is every 18 months: every 18 months we generate as much data as we had in the entirety of prior history. The sheer volume of data we generate is already greater than we can manage in a timely fashion. Our ability to effectively, efficiently, and quickly gather insights and develop intelligence to make correct, effective decisions from data at this massive scale will be the differentiating factor in our ability to overmatch our adversaries. Leveraging AI/ML will permit us to offload the majority of the tedious work of sifting through and categorizing unstructured data, be it natural language, imagery, or sensor data, so that we can focus our human analysts on the analysis of important, relevant data.
The discipline of artificial intelligence is rapidly advancing. We are able to teach computers to perform tasks that we couldn’t have performed with even a supercomputer a decade ago. The capabilities we will be able to field over the next decade through the use of AI/ML will develop quickly and cause significant disruption to the way we manage data, how we act on intelligence, and how we engage our adversaries. In order to capably respond to this future uncertainty, we must embrace practices and methodologies built with the resiliency to absorb these disruptive forces and course correct efficiently. This methodology is DevSecOps, and through this methodology will the depths of these uncharted waters be plumbed.
The adoption of AI/ML, like the adoption of DevSecOps, brings changes to how technology is developed, operated, and maintained. Red Hat has been a leader in the adoption of DevSecOps across commercial industry and the US Federal Government. IBM, likewise, has long been a pioneer and leader in AI/ML. If you are interested in adopting AI/ML using open source to support explainable AI, read this whitepaper.
Read the first blog in this series, "Achieving Substantial Gains in IT Performance Across Government Through DevSecOps."