
A.I. is helping detect cyberattacks. It needs to do more

By Jeremy Kahn and Jonathan Vanian
November 16, 2021, 3:33 PM ET

Last week, popular stock trading app Robinhood revealed another huge data breach. Hackers stole five million customer names, two million customer email addresses, and more specific, valuable personal information from a smaller set of users. With these kinds of attacks becoming increasingly common, many are hoping that A.I. can play a role in bolstering their cyber defenses.

The good news is that A.I. is increasingly helping. Last week, at Coins2Day’s Brainstorm A.I. conference in Boston, I moderated a panel on A.I.’s role in cybersecurity with John Roese, the global chief technology officer at Dell, and Corey Thomas, the chairman and CEO of Rapid7, which sells cybersecurity software. Both Roese and Thomas said that A.I. is now playing a key role in helping most large organizations detect cyberattacks. Most of these are A.I. systems that learn what a company’s normal network activity looks like, and then detect activity that deviates from business as usual.
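To make that idea concrete, here is a toy sketch of baseline-and-deviation anomaly detection, the general approach described above: learn what "normal" traffic looks like from history, then flag observations that stray too far from it. This is not any vendor's actual product; the traffic numbers and the three-sigma threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn 'business as usual' from past per-minute request counts."""
    return mean(history), stdev(history)

def is_anomalous(observed, baseline, threshold=3.0):
    """Flag activity that deviates sharply from the learned baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hypothetical traffic hovering around 100 requests per minute.
history = [97, 103, 99, 101, 104, 96, 100, 98, 102, 100]
baseline = build_baseline(history)

print(is_anomalous(102, baseline))  # False: typical traffic
print(is_anomalous(450, baseline))  # True: possible exfiltration spike
```

Real products model far richer signals (protocols, destinations, timing, user behavior), but the learn-normal-then-flag-deviations shape is the same.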

This kind of software represents a big advance over systems that were designed simply to keep the bad guys out of the network. Firewalls alone don’t cut it in today’s world, where very sophisticated hacking tools are easily available to almost anyone on the dark web. So most companies employ A.I.-based systems alongside firewalls to try to detect attackers who get through those defenses.

But that’s where the good news from Roese’s and Thomas’s talk largely ended. The problem, Roese said, is that the bad guys are increasingly using A.I. too. Attackers are automating the task of probing firewalls, searching for the right combination of attacks that will get through, and even using machine learning to compose more convincing phishing emails that will allow them to penetrate networks. Thomas noted that most of the A.I. being used by cybercriminals so far isn’t particularly sophisticated. But, he said, it doesn’t have to be: often, simple methods work well. And as Roese noted, attackers can try many different attack combinations and only have to get it right once. Defenders have to get it right every time.

Another problem, according to both Roese and Thomas, is that while A.I. has made great inroads in detecting cyberattacks in the past few years, it is still very underutilized in preventing cyberattacks, in ensuring good cybersecurity practices are being followed, and in responding to cyberattacks once they are underway.

“Once that attack occurs and you are compromised, the speed in which you can respond today is primarily gated by human effort — which is not fast enough because the attack is definitely coming from something that’s enabled by machine intelligence, advanced automation,” Roese said.

Thomas noted that the easiest way to prevent cyberattacks is simply routine network maintenance: limiting administrative access permissions, performing regular software updates, and regularly changing passwords, exactly the kinds of cyber hygiene at which companies stumble. A.I. can help automate many of these processes, but so far few businesses are using it this way.
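The rule-based core of that kind of hygiene automation can be sketched in a few lines. Everything below is hypothetical: the account names, hosts, and age thresholds are made up, and a real A.I.-assisted system would go further, learning which findings to prioritize rather than applying fixed rules.

```python
from datetime import date, timedelta

# Illustrative policy thresholds, not an industry standard.
MAX_PASSWORD_AGE = timedelta(days=90)
MAX_PATCH_LAG = timedelta(days=30)

def hygiene_report(accounts, hosts, today):
    """Return the cyber hygiene items that need attention."""
    findings = []
    for name, last_rotated in accounts.items():
        if today - last_rotated > MAX_PASSWORD_AGE:
            findings.append(f"rotate password for {name}")
    for host, last_patched in hosts.items():
        if today - last_patched > MAX_PATCH_LAG:
            findings.append(f"patch {host}")
    return findings

# Hypothetical inventory data.
accounts = {"admin": date(2021, 5, 1), "jdoe": date(2021, 10, 20)}
hosts = {"web-01": date(2021, 11, 10), "db-01": date(2021, 8, 2)}

print(hygiene_report(accounts, hosts, today=date(2021, 11, 16)))
# ['rotate password for admin', 'patch db-01']
```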

Likewise, once an attack has been detected, speed is essential. And yet most companies, Roese said, still depend on human cybersecurity experts to figure out how to mitigate an attack. That needs to change, he said. Many steps can be taken automatically to contain a hack and even push the attacker out of the network. The more sophisticated A.I.-enabled cybersecurity software, such as that sold by Rapid7, Darktrace, and Vectra, has this ability. But companies are sometimes reluctant to use it, Roese said, for fear that it will be triggered by false alarms, unnecessarily shutting down essential IT functions.

“I would say that there’s still a lack of trust, both on automation and A.I., for some of the operational challenges,” Thomas said.

What’s worse, A.I. that is being used to enable other key parts of a company’s business actually represents a great way for hackers to gain entry into and attack networks. Often, these A.I. systems have broad permissions to draw data and interact with other software across a network. They are, essentially, superusers, much like the human network administrators who are a favorite target of hackers. That makes them a great target for an attacker, Roese said. He added that for attackers looking for high-value data to steal or, in the case of a ransomware attack, hold hostage, the data contained in trained A.I. algorithms is some of the most expensive data, on a per-bit basis, in an organization.

Right now, too few companies are thinking about how to secure these A.I. systems, he said.

Not to end on too much of a down note, there was some potential good news on A.I.’s application to cybersecurity last week. BT, the British telecom group, announced that its researchers had tested A.I.-enabled cybersecurity software that had been trained on epidemiological models of biological diseases. It can, according to BT, “automatically model and respond to a detected threat within an enterprise network.” The software, which BT calls Inflame, uses this model to “predict the next stages of an attack and rapidly identify the best response to prevent it from progressing any further.”

With that, here’s the rest of this week’s news in A.I. Thank you to my colleague and Coins2Day “Eye on A.I.” co-writer Jonathan Vanian for compiling the news, talent, and “Coins2Day on A.I.” sections of the newsletter this week.

Jeremy Kahn
@jeremyakahn
[email protected]

A.I. IN THE NEWS

Robo-mania. North American sales of robotics reached a record $1.48 billion for the first nine months of 2021, topping a record of $1.47 billion set during the first nine months of 2017, according to a report by The Wall Street Journal citing statistics from the Association for Advancing Automation trade association. “With labor shortages throughout manufacturing, logistics and virtually every industry, companies of all sizes are increasingly turning to robotics and automation to stay productive and competitive,” trade association president Jeff Burnstein said in a statement.

Splunk CEO waves goodbye. Data analytics and IT firm Splunk said that CEO Doug Merritt would step down and be replaced by company chair Graham Smith. Investors were rattled by the sudden departure, sending the company’s shares down 18% after the announcement.

Behold, the giant language models. Nvidia debuted its NeMo Megatron developer tools, which companies can use to train their own language models, used to understand and react to written and spoken language. The developer tools are based on Nvidia’s Megatron large language model, a competitor to other giant A.I. language models like OpenAI’s GPT-3 and Google’s BERT software. Meanwhile, the U.K. government said it would investigate Nvidia’s $40 billion takeover of British semiconductor giant Arm in order to probe potential “antitrust and security issues,” according to a report by The Financial Times.

Deep learning meets weather. Google’s A.I. unit published a blog post detailing its research into using deep learning to predict the weather more accurately. Google researchers said that deep learning provides an alternative to conventional forecasting systems that rely on supercomputers and “traditional physics-based techniques” that humans must program. Google’s deep learning weather forecasting system performed better than an existing forecasting system, the company said, and points toward a future of weather prediction systems that do “not rely on hand-coding the physics of weather phenomena” but instead simply ingest weather data to make their predictions. Google subsidiary DeepMind is also researching similar A.I.-powered weather forecasting systems.

EYE ON A.I. TALENT

Microsoft software development subsidiary GitHub chose Paige Bailey as director for data science and MLOps, which refers to machine learning operations. Bailey was previously the principal product manager of developer tools at Microsoft and a lead product manager at Google’s DeepMind research unit.

Databook, a startup specializing in sales software, hired Bruno Fonzi as vice president of engineering. Fonzi was previously a director of engineering at Salesforce.

The U.S. National Guard hired Martin Akerman as its first chief data officer, reported government news publication Nextgov. Akerman was previously a data strategy officer of the U.S. Air Force.

EYE ON A.I. RESEARCH

Imagining disaster in order to avoid it. A problem with trying to use reinforcement learning, whereby an A.I. system learns from experience rather than from historical data, is that a bad decision in many real-world scenarios can be catastrophic. That's why reinforcement learning is mostly used to master video games or simulations, in which the consequences of getting it wrong aren't severe.

Now a group of researchers from China's Zhejiang University and Huawei have proposed a system in which an A.I. would learn from studying examples of when people decline to pursue an action because of its danger. After mastering the challenge of predicting when humans will believe an action is unsafe, using supervised learning, the system would then continue training using reinforcement learning. During this process, the A.I. would try to "imagine" the consequences of its actions (by projecting forward what it thinks is most likely to happen). If it determines that a human would likely block an action because it's unsafe, the system would block the action itself. The research, published on the non-peer-reviewed research repository arxiv.org, could open the door to wider use of reinforcement learning in real-world situations.
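The "imagine, then check" loop described above can be sketched minimally. This is not the researchers' actual system: the world model and the human-veto classifier below are stand-ins (the real system learns both from data), and the safe operating range is an invented assumption purely for illustration.

```python
import random

def predicted_next_state(state, action):
    """'Imagine' an action's consequence (stand-in for a learned world model)."""
    return state + action

def human_would_block(state):
    """Stand-in for a classifier trained, via supervised learning, to
    predict when a person would veto a state as unsafe."""
    return abs(state) > 10  # hypothetical safe operating range

def choose_action(state, candidate_actions):
    """Only consider actions whose imagined outcome looks safe."""
    safe = [a for a in candidate_actions
            if not human_would_block(predicted_next_state(state, a))]
    return random.choice(safe) if safe else None  # no safe option: do nothing

# From state 9, action 5 imagines state 14, which the classifier vetoes,
# so only -1 or 0 can be chosen.
print(choose_action(9, [-1, 0, 5]))
```

The key design point is that the safety check runs on the *imagined* next state before the action is taken, so the agent never has to experience the catastrophe to learn to avoid it.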

FORTUNE ON A.I.

IBM debuts quantum machine it says no standard computer can match—By Jeremy Kahn

Bias in A.I. is a big, thorny, ethical issue—By Jonathan Vanian

The U.S. urgently needs an A.I. Bill of Rights—By Steve Ritter

How companies from FedEx to Intel are getting their A.I. projects to the finish line—By Anne Sraders

Rivian faces a tougher road to profitability than Tesla ever did, analysts warn—By Adrian Croft

BRAIN FOOD

Polyglots vs. bilinguals. Facebook researchers have shown that massive A.I. systems trained to translate many languages simultaneously can translate better between any of the language pairs in their repertoire than smaller A.I. algorithms trained specifically for just two languages. The findings, published on the non-peer-reviewed research repository arxiv.org, involved several large A.I. systems that learned to translate between Czech, German, Icelandic, Japanese, Russian, Chinese, and the West African language Hausa. The company found that a neural network with nearly 4 billion variables outperformed other A.I. designs, including some that were supposed to more closely mimic how the brain works. (Neural networks in general are loosely based on the human brain, but only very loosely.)

The research is significant because it shows the extent to which large tech companies are increasingly turning to a few ultra-large A.I. systems to form the "foundation" on which they build a host of narrower services, as opposed to training much smaller, narrower A.I. systems for each specific task. The fact that these large systems seem to perform better than narrow systems also has important implications for the democratization of A.I. Training and running these massive models is expensive, meaning that only tech giants will be able to afford to build and host them, making it hard for other businesses to avail themselves of the same capabilities unless they buy them from prominent tech companies such as Google, Microsoft, OpenAI, or Baidu.

About the Authors

Jeremy Kahn is the AI editor at Coins2Day, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Coins2Day’s flagship AI newsletter.

Jonathan Vanian is a former Coins2Day reporter. He covered business technology, cybersecurity, artificial intelligence, data privacy, and other topics.