Artificial Intelligence is an essential ally of cybersecurity. Its potential makes it a vital resource both for those who defend computer systems and for those who intend to attack them. As in a science fiction movie, the machines face each other, and only the best adapted will emerge victorious.
Today we are surprised by autonomous cars, by personal assistants that answer our spoken questions, and by the incredible deepfake videos; a far cry from the work of the Dartmouth Workshop in the 1950s that is usually associated with the birth of current approaches to Artificial Intelligence (AI). Every few days, reality proves stranger than fiction. This fact has not gone unnoticed in the world of cybersecurity, where both sides (cybercriminals and those of us who are dedicated to protecting the security of companies) strive to make the most of these affordable technologies: open-source software and the cloud have democratized this vast world of solutions, making them accessible to all kinds of groups.
Nowadays, it is straightforward and cheap to build AI models that help us detect patterns and, in many cases, classify data massively and automatically (the example of Google Images is self-explanatory: we upload an image and it searches for similar images).
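As a small illustration of how little code this takes today, here is a minimal sketch of similar-image search in the spirit of the Google Images example, assuming scikit-learn is installed and using its bundled digit images:

```python
# Minimal similar-image search on scikit-learn's bundled digit images.
# A toy sketch of the idea behind reverse image search, not a production system.
from sklearn.datasets import load_digits
from sklearn.neighbors import NearestNeighbors

digits = load_digits()                       # 1,797 small 8x8 grayscale images
index = NearestNeighbors(n_neighbors=5).fit(digits.data)

query = digits.data[0:1]                     # "upload" one image as the query
distances, neighbors = index.kneighbors(query)
print("Most similar images:", neighbors[0])  # indices of the 5 closest images
```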
Three different approaches
Training AI models for cybersecurity use cases usually follows one of three approaches, always starting from a set of data (the dataset). In supervised training, the data is already labelled and the process is guided so that the model learns what inputs it will receive in the future and what outputs it should produce. In unsupervised training, the data is not labelled and the system detects patterns and classifies them autonomously, using its own criteria. A third approach, reinforcement learning, gives the system feedback on every iteration, which it uses to correct its behaviour and optimize its training.
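To make the first two approaches concrete, here is a minimal sketch on the same toy data, assuming scikit-learn; reinforcement learning needs an environment and a reward signal, so it is only noted in a comment:

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy dataset: 300 points in 2-D, drawn from three groups.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: the labels y are known, and the model learns input -> output.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised prediction:", clf.predict(X[:1]))

# Unsupervised: no labels; the system groups the data by its own criteria.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster:", km.labels_[:1])

# Reinforcement learning would instead need an environment that returns a
# reward after each action, which the agent uses to correct its behaviour.
```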
Within the large family of AI techniques and methods generally known as Machine Learning, the Deep Learning subfamily is used intensively in the cybersecurity sector, both by malicious actors and by the platforms that help defend organizations.
New features
Almost all security platform manufacturers began introducing AI-based features a few years ago, especially to make detection tasks much more efficient. Perhaps the most paradigmatic case is malware, where the number of variants, their complexity, and the pace of mutation are overwhelming, and a ‘traditional’ approach based on signatures or file typology quickly becomes outdated. Similarly, the behaviour of users already inside our systems can take on strange and very diverse nuances, and when we think of groups of thousands of people, manual (human) monitoring makes no sense.
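To make the contrast with signatures concrete, here is a hedged sketch of the machine-learning alternative: a classifier trained on static file features. The feature names and the synthetic data are illustrative assumptions, not a real malware corpus:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical static features per file: [size_kb, entropy, imported_apis,
# packed_flag]. Synthetic data stands in for a labelled malware corpus.
benign  = rng.normal([300, 4.5, 40, 0.1], [100, 0.5, 10, 0.1], (500, 4))
malware = rng.normal([200, 7.2, 15, 0.8], [100, 0.5, 10, 0.1], (500, 4))
X = np.vstack([benign, malware])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Unlike a signature, the model generalizes to variants it has never seen.
print(f"Accuracy on unseen samples: {model.score(X_te, y_te):.2f}")
```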
In recent years, we have seen various security systems that use one or more of these techniques for pattern detection, behaviour classification, and so on. Perhaps the most representative cases are EDR (Endpoint Detection and Response), NDR (Network Detection and Response), and UEBA (User and Entity Behavior Analytics) platforms, or those aimed at detecting DDoS (Distributed Denial-of-Service) attacks, to name the most popular. These platforms provide essential value to all types of organizations by offering ‘automatic’ services (the model does the initial work of detecting a pattern or classifying a threat), which can trigger scheduled actions or specific alerts. With this, human cybersecurity operations teams can be optimized, focusing their effort where it adds the most value.
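As a flavour of the UEBA idea, here is a minimal sketch with invented behavioural features and synthetic data (scikit-learn assumed):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-user features: [logins_per_hour, mb_transferred].
normal_behaviour = rng.normal([2, 50], [1, 20], (1000, 2))
detector = IsolationForest(random_state=0).fit(normal_behaviour)

# A user suddenly logging in 40 times an hour and moving 5 GB stands out.
suspicious = np.array([[40, 5000]])
print(detector.predict(suspicious))  # -1 means anomaly -> raise an alert
```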
The dark side
On the other hand, on the dark side of cybersecurity, such a powerful and accessible family of technologies allows any interested person with basic training to reuse the abundant examples and source code available to create, for instance, systems that generate malware that evades detection by EDR platforms (such as MalGAN in 2017 or DeepLocker, presented at Black Hat USA in 2018) or that generate targeted dictionaries for attacks on particular groups (the PassGAN case in 2019).
Most of these tools use a type of Deep Learning model known as GANs (Generative Adversarial Networks), which have become very popular through fake news and, above all, deepfake videos. This type of approach (the 2020 case of an AI model writing an article by itself using the GPT-3 platform is truly historic) can have a real impact on societies, as we have seen in recent years in several electoral processes and critical social revolts. The combination of classic disinformation techniques with advanced AI models of this type and bots on social networks can significantly shape the opinions and attitudes of our society, steering those mass sentiments toward very different ends.
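For context, this is the adversarial pattern underneath tools like MalGAN, reduced to a hedged toy skeleton in PyTorch: a generator learns to produce samples that a discriminator can no longer tell apart from ‘real’ data (here just random vectors, not malware features):

```python
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 8   # toy sizes chosen purely for illustration

# Generator: turns random noise into candidate samples.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM)
)
# Discriminator: estimates the probability that a sample is "real".
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.randn(256, DATA_DIM) + 2.0   # stand-in for "real" samples

for step in range(200):
    # 1) Train the discriminator to separate real from generated samples.
    fake = generator(torch.randn(256, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_data), torch.ones(256, 1))
              + loss_fn(discriminator(fake), torch.zeros(256, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fooled = discriminator(generator(torch.randn(256, LATENT_DIM)))
    g_loss = loss_fn(fooled, torch.ones(256, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```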
The battle between good and evil
Once again, the battle between good and evil on the internet remains constant, although now we talk about AI. The ‘blue team’, in charge of defending companies’ computer systems, will have many rules to follow: codes of ethics and good practice, third-party supervision, strict control of the datasets (also to prevent attacks that ‘poison’ the model), models that protect personal data (the GDPR would also apply in these cases, for example), and the availability of reliable, neutral datasets. To all this we must add, where applicable, the budget constraints on the cloud platforms and services needed to run all this intelligence and make it effective in each specific use case.
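One of those controls on the dataset can be as simple as a drift check before retraining. The following sketch (thresholds and data are illustrative assumptions) rejects an incoming training batch whose feature statistics deviate too far from a trusted reference; a crude first barrier against poisoning, not a full defence:

```python
import numpy as np

def looks_poisoned(reference: np.ndarray, new_batch: np.ndarray,
                   max_shift: float = 3.0) -> bool:
    """Flag the batch if any feature mean drifts more than `max_shift`
    reference standard deviations from the trusted baseline."""
    ref_mean, ref_std = reference.mean(axis=0), reference.std(axis=0) + 1e-9
    shift = np.abs(new_batch.mean(axis=0) - ref_mean) / ref_std
    return bool((shift > max_shift).any())

rng = np.random.default_rng(2)
trusted = rng.normal(0, 1, (1000, 4))      # vetted historical data
tampered = rng.normal(5, 1, (100, 4))      # e.g. attacker-injected samples
print(looks_poisoned(trusted, tampered))   # True -> do not retrain on it
```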
On the other hand, the ‘red team’, in charge of carrying out tests and controlled attacks to detect security holes, will have complete freedom of movement, free of quality controls and tight budgets. The end will justify the means. A legion of criminal organizations, actors with alleged sponsors, volunteers for the cause, and apprentice ‘movie-style’ hackers will put into practice AI models written in Python and downloadable from GitHub and similar sites. A recipe for guaranteed success: creativity is the only limit, and all kinds of labs make good places for seemingly innocent experiments with would-be red team tools.
The effectiveness of AI models
For years now, no professional in the cybersecurity sector has questioned the need to use AI in every available defense and detection platform. The battle remains a hard one given the variety of actors and methods, and only with ‘intelligent help’ will we be able to face this epic battle with any guarantee of success.
The critical point is constant, hard work on the effectiveness of the AI models we use in our platforms (tomorrow there will be newly compiled malware and new intrusion techniques), combined with an optimal orchestration of people, processes, and technology, with automation as the backbone of the entire operations cycle. By training those models repeatedly and fine-tuning their accuracy, we stand a real chance of protecting our organization. Best practices in our operations will do the rest.
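In practice, that constant work often takes the shape of a retrain-evaluate-deploy loop with a quality gate, sketched below under obvious assumptions (the data loading and deployment steps are placeholders, and the threshold is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

MIN_ACCURACY = 0.95  # illustrative quality gate, tuned per use case

def load_latest_dataset():
    # Placeholder: in production this would pull fresh, vetted telemetry.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    return train_test_split(X, y, test_size=0.3, random_state=0)

X_tr, X_te, y_tr, y_te = load_latest_dataset()
candidate = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = candidate.score(X_te, y_te)

# Deploy only if the retrained model clears the gate; otherwise keep the old one.
if accuracy >= MIN_ACCURACY:
    print(f"Deploying retrained model (accuracy {accuracy:.2f})")
else:
    print(f"Keeping current model; candidate scored only {accuracy:.2f}")
```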
The threats are too varied and numerous to work any other way: machine against machine, as in a science fiction movie.
Competitive advantage
In short, the family of technologies around AI offers an enormous competitive advantage to all actors in cybersecurity. Neither ‘they’ nor ‘we’ will pass up the opportunity to use it for our own ends. The large cybersecurity service providers have optimized our operations to be much more agile and precise in our daily work, even when data volumes are very high. For years, AI models have automatically detected and responded to attacks orchestrated by, or based on, other AI models. This fully automated scenario is here to stay. For this reason, we must choose our service providers and associated technology manufacturers with the utmost care.
As we have seen with Artificial Intelligence in recent decades, we will likely see a similar evolution in quantum technologies and their derivatives in the coming years. Now is the time to face the battle in cyberspace: the risks and consequences are already visible in organizations worldwide.
It’s time to be prepared, choose your allies and your weapons, and develop a strategy for the battle.