Artificial intelligence (AI) is no longer a thing of sci-fi books; it is increasingly part of our daily lives. The size of the AI market shows how embedded this technology is becoming: in 2019, the market for AI products was valued at just over $27 billion, and by 2027 it is predicted to be worth almost $267 billion. AI, and its various flavors such as machine learning (ML) and deep learning (DL), is finding its way into products and services from chemistry to policing to medicine to chatbots.

Any transformative technology brings with it both good and bad, and AI is no different. Security is one area where AI-enabled products and services are seeing a mixed response. Privacy is another, which we will explore in our next blog. In this article, we will discuss some of the ways in which AI is both empowering and challenging the cybersecurity of the enterprise.

Putting artificial intelligence to good use for cybersecurity

Data is now being generated at a rate that is almost impossible to keep up with. In terms of detecting cyber-threats and criminal activity, analysts struggle with the sheer volume of traffic. This is where AI excels: it is designed to work best in situations where there are massive amounts of data.

Detection of cyberthreats

A lack of cyber skills in industry, hyper-connected IT systems, remote employees, and increasingly sophisticated cyber-attacks are creating a perfect storm in threat detection. The “2018 Threat Hunting Report” found that 52% of Security Operations Centers (SOCs) had experienced a doubling of cyber-threats over previous years. The study concluded that to counterbalance this massive threat overload, 82% of SOCs are moving towards ‘advanced’ threat hunting techniques that use AI.

AI-based cyber-threat detection uses a type of machine learning called ‘unsupervised learning’. The technique is trained to find patterns in input data; once the patterns are learned, anomalies outside of the normal range can be spotted and used to identify unusual behavior, such as the illegitimate movement of files. A popular application of this technique is User and Entity Behavior Analytics (UEBA). Solutions that offer UEBA use machine learning to monitor network events and spot unusual patterns of behavior as humans, devices, and networks interact. One of the key differentiators of UEBA is that it does not need predefined patterns or hard-coded rules; instead, the ML algorithm learns from real-time data, and accuracy and predictive power improve over time.
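
To make this concrete, here is a minimal sketch of unsupervised anomaly detection in the spirit of UEBA. It assumes Python with scikit-learn and NumPy; the event features are hypothetical stand-ins for per-user telemetry, and a real UEBA product is of course far more sophisticated.

```python
# Minimal sketch of unsupervised anomaly detection, in the spirit of UEBA.
# The three features are hypothetical stand-ins for per-user telemetry:
# bytes transferred (MB), files touched, and hour of activity.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behavior: most events cluster around "normal" values.
normal_events = rng.normal(loc=[50.0, 10.0, 13.0],
                           scale=[5.0, 2.0, 1.5],
                           size=(1000, 3))
# A few anomalies: e.g., a huge file transfer at 3 a.m.
odd_events = np.array([[500.0, 200.0, 3.0],
                       [450.0, 180.0, 2.0]])

events = np.vstack([normal_events, odd_events])

# No labels or hard-coded rules: the model learns the pattern range from data.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(events)

flags = model.predict(events)  # +1 = within learned pattern, -1 = anomaly
print(f"Flagged {np.sum(flags == -1)} of {len(events)} events as anomalous")
```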

One of the most valuable capabilities of AI-enabled cybersecurity solutions is the ability to send out more accurate and focused alerts when anomalous behavior is spotted. This reduces false positives and helps avoid security analyst fatigue.
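
As an illustration of how anomaly scores can focus alerting, the short sketch below ranks events by score and alerts only on the most suspicious few, rather than on every rule hit. The data and the alert budget of five are arbitrary assumptions for illustration.

```python
# Minimal sketch of score-based alert triage: rank events by anomaly score
# and alert only within a small budget, reducing false-positive noise.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
events = np.vstack([rng.normal(0.0, 1.0, size=(500, 4)),  # routine activity
                    rng.normal(6.0, 1.0, size=(3, 4))])   # a few outliers

model = IsolationForest(random_state=0).fit(events)
scores = model.decision_function(events)  # lower score = more anomalous

alert_budget = 5
for idx in np.argsort(scores)[:alert_budget]:
    print(f"ALERT: event {idx}, anomaly score {scores[idx]:.3f}")
```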

Financial fraud

As more consumers shop online, card-not-present (CNP) fraud is seeing an uptick in activity. Add to this the growing problems of synthetic identities and identity theft, and the financial sector faces a formidable challenge.

Accordingly, financial losses due to fraud are predicted to reach $200 billion by 2024, according to analyst firm Juniper Research. Transaction volumes are also surging: research from ACI Worldwide and Global Data shows payment transactions increasing by over 23% through 2024.

The Juniper report expects that, in response to increasing financial fraud across a complicated financial landscape, machine learning-based solutions will be increasingly adopted. AI and its subset ML can learn about transactions and predict anomalous behavior; in fact, the more data the better, as increasing volumes of transactions help to train the algorithms behind the AI-enabled services and platforms used to identify fraudulent activity. AI-based Transaction Monitoring Systems (TMS) are also much better at reducing false positives, which impact customer experience and cause operator fatigue; more traditional TMS often have 90%+ false-positive rates during AML (anti-money laundering) checks.
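
To sketch the idea, here is a toy supervised fraud classifier for transaction monitoring. It assumes Python with scikit-learn and NumPy; the transaction features and labeling rule are synthetic stand-ins invented for illustration, not a real TMS data model.

```python
# Toy sketch of a supervised fraud classifier for transaction monitoring.
# Features and labels are synthetic; real TMS pipelines use far richer data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000

# Hypothetical features: amount, hour of day, CNP flag (card not present).
amount = rng.lognormal(mean=3.5, sigma=1.0, size=n)
hour = rng.integers(0, 24, size=n)
cnp = rng.integers(0, 2, size=n)
X = np.column_stack([amount, hour, cnp])

# Toy labeling rule: large CNP transactions at odd hours are more often fraud.
suspicious = (amount > 150) & (cnp == 1) & ((hour < 6) | (hour > 22))
y = (suspicious | (rng.random(n) < 0.01)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The more labeled transactions the model sees, the better it discriminates.
clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train)

print(f"Hold-out accuracy: {clf.score(X_test, y_test):.3f}")
```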

The misuse of AI for cybercriminal activity

AI is an incredibly useful tool when there is too much data for human operators to work through quickly and accurately. Picking out a needle in a haystack is the forte of AI-enabled cybersecurity tools, leaving human operators free to do the more interesting post-alert analysis. But the same capability can also be put to malicious use. In 2018, the report “The Malicious Use of Artificial Intelligence” was published, highlighting three areas of the cyber-attack profile that AI will change:

  1. Expansion of existing threats. AI can enable attacks to be carried out at larger scale, faster, and across a wider set of targets.
  2. New attacks. AI systems could be used maliciously to complete tasks impractical for humans. In addition, malicious actors could exploit vulnerabilities in AI-assisted cybersecurity platforms.
  3. Change to the typical character of threats. The report concluded by stating that the researchers expect AI-assisted attacks to be “especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems.”

Here are just two examples of how AI can be used for malicious purposes.

AI, Deepfakes, and Business Email Compromise (BEC)

In 2019, a BEC attack was carried out against a British CEO, resulting in a loss of $240,000. The attack used a voice faked to sound like the company’s head; analysts suspect this was done using deepfake technology. Deepfakes are now common: in 2018, 14,698 deepfake videos were found online during a research trawl. The technology uses an AI subset known as deep learning (DL). This AI technique is based on neural networks, which are (in some ways) similar to the way human brains process information, and it can be used to manipulate real video and real voice. One famous example of a deepfake video is the Obama speech produced by BuzzFeed. Deepfake technology offers cybercriminals a perfect tool in their social engineering armory. Imagine receiving an email that contains a fake video of yourself in an embarrassing or even illegal situation. This adds a new dimension to existing cyber-attack methods such as ransomware, phishing, and sextortion.
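
For a flavor of the deep-learning idea behind face-swap deepfakes, here is a minimal sketch of the classic shared-encoder, dual-decoder autoencoder architecture. It assumes Python with PyTorch; the layer sizes and image dimensions are arbitrary, and real deepfake pipelines are far more elaborate.

```python
# Minimal sketch of the autoencoder idea behind face-swap deepfakes:
# one shared encoder learns a common face representation, and two decoders
# reconstruct it as person A or person B.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop (hypothetical size)

encoder = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, 128))
decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG))
decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG))

# Training would reconstruct A faces through decoder_a and B faces through
# decoder_b; at swap time, an A face is pushed through decoder_b instead.
face_a = torch.rand(1, IMG)            # stand-in for a real face image
swapped = decoder_b(encoder(face_a))   # "A's expression, rendered as B"
print(swapped.shape)                   # torch.Size([1, 12288])
```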

AI-assisted cyber-attacks

A demonstration from IBM in 2018 showed how AI can be used to make ransomware attacks even more successful. Presenting at Black Hat, the DeepLocker research team discussed how AI could be used to aid a ransomware attack, with the AI embedded in the malware itself. The technique uses a process called “AI-Locksmithing”: the algorithm applies capabilities similar to those used in threat detection, i.e., spotting trends and patterns before making a decision. In the case of AI-enabled malware, that decision is whether to execute the attack or remain in stealth mode.

A recent example of an AI-assisted cyber-attack was the data breach at TaskRabbit, in which 3.75 million customers had financial and personal data stolen. Analysts believe that an AI-enabled botnet was used, with the botnet’s slave machines executing a DDoS attack on TaskRabbit’s servers.

Nothing artificial about cybersecurity

Cybersecurity is benefiting from the use of AI and its subsets, ML and DL, to stem the flow of cyber-attacks. But as we have seen time and again, cybercriminals are extremely versatile and innovative. AI not only enables swift and accurate analysis of data for good purposes, it also allows data to be used for malicious ends. The arms race between the enterprise and the cybercriminal looks set to continue; this time, however, the good guys have many strings to their bow in the fight against cybercrime.
