Updated: Sep 26, 2023   |   John Stringer

Next DLP Extends Visibility and Adaptive Controls for Leading Generative AI Tools


Generative AI tools are among the new challenges that data security professionals must address. To give organizations a solution to this challenge, Next has announced the extension of the company's generative AI policy templates beyond ChatGPT to include Hugging Face, Bard, Claude, Dall.E, Copy.Ai, Rytr, Tome and Lumen 5 within the company's Reveal Platform. This extended visibility and control enables customers to stop data exfiltration, expose risky behavior and educate employees about the use of generative AI ("GenAI") tools.

CISOs around the world are grappling with the proliferation of GenAI tools, including text, image, video and code generators. They worry about how to manage and control their use within the enterprise and the corresponding risk of sensitive data loss through GenAI prompts. Researchers at Next investigated activity from hundreds of companies during July 2023 and found that:

  • 97% of companies had at least one user access ChatGPT
  • 8% of all users accessed ChatGPT
  • ChatGPT navigation events account for <0.01% of traffic. For comparison, Google navigation events consistently account for 5-10% of traffic.

"Generative AI is running rampant inside of organizations, and CISOs have no visibility into or protection over how employees are using these tools," said John Stringer, Head of Product at Next DLP. "This extension of our policy templates to include top Generative AI tools is driving decision making within our customers' environments on the risk and required security associated with their use."

With these new policies, customers gain enhanced monitoring and protection of employees using the most popular GenAI tools on the market. From educating employees on the potential risks associated with using these services, to triggering alerts when an employee visits a GenAI tool website, security teams can remind employees of, and reinforce, corporate data usage protocols.
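As an illustration of the visit-triggered approach, the sketch below matches a navigation event's hostname against a list of known GenAI domains. The domain list and function names are hypothetical; actual Reveal policy templates are configured in the product, not written as code.

```python
# Hypothetical GenAI domain list for illustration only; a real policy
# template would ship and maintain its own detector definitions.
GENAI_DOMAINS = {
    "chat.openai.com",
    "huggingface.co",
    "bard.google.com",
    "claude.ai",
}

def is_genai_visit(host: str) -> bool:
    """Return True if a navigation event's hostname is a known GenAI
    site or one of its subdomains."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS)
```

A matching event could then drive an in-the-moment reminder of corporate data usage policy rather than an outright block.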

In addition, customers can set up a policy to detect the use of sensitive information such as internal project names, credit card numbers, or social security numbers in GenAI conversations, enabling organizations to take preventive measures against unauthorized data sharing. These policies are just two of many possible configurations that protect organizations whose employees are using GenAI tools. 
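The sensitive-data detection described above can be sketched with simple pattern matching. This is a minimal illustration, not the Reveal Platform's actual detection logic: the regexes and the `flag_sensitive` helper are assumptions, and the Luhn checksum is used only to cut false positives on card-like digit runs.

```python
import re

# Illustrative patterns only; production DLP detectors use validated,
# context-aware rules rather than bare regexes.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, so random digit runs are not flagged as cards."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def flag_sensitive(prompt: str) -> list[str]:
    """Scan a GenAI prompt and return the kinds of sensitive data found."""
    findings = []
    if SSN_RE.search(prompt):
        findings.append("ssn")
    for m in CARD_RE.finditer(prompt):
        if luhn_valid(m.group()):
            findings.append("credit_card")
    return findings
```

A policy built on such detectors could warn the employee, log the event, or block the prompt before it leaves the endpoint.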

For more information on the Reveal Platform and how to protect intellectual property visit our page on Protecting Intellectual Property.

