This post was co-written with Nick Frichette and Vijay George from Datadog.
As organizations increasingly adopt Amazon Bedrock for generative AI applications, protecting against misconfigurations that could lead to data leaks or unauthorized model access becomes critical. The AWS Generative AI Adoption Index, which surveyed 3,739 senior IT decision-makers across nine countries, revealed that 45% of organizations selected generative AI tools as their top budget priority in 2025. As more AWS and Datadog customers accelerate their adoption of AI, building AI security into existing processes will become essential, especially as more stringent regulations emerge. But looking at AI risks in a silo isn’t enough; AI risks must be contextualized alongside other risks such as identity exposures and misconfigurations. The combination of Amazon Bedrock and Datadog’s comprehensive security monitoring helps organizations innovate faster while maintaining robust security controls.
Amazon Bedrock delivers enterprise-grade security by incorporating built-in protections across data privacy, access controls, network security, compliance, and responsible AI safeguards. Customer data is encrypted both in transit using TLS 1.2 or above and at rest with AWS Key Management Service (AWS KMS), and organizations have full control over encryption keys. Data privacy is central: your inputs, prompts, and outputs are not shared with model providers nor used to train or improve foundation models (FMs). Fine-tuning and customizations occur on private copies of models, providing data confidentiality. Access is tightly governed through AWS Identity and Access Management (IAM) and resource-based policies, supporting granular authorization for users and roles. Amazon Bedrock integrates with AWS PrivateLink and supports virtual private cloud (VPC) endpoints for private, internal communication, so traffic doesn’t leave the Amazon network. The service complies with key industry standards such as ISO, SOC, CSA STAR, HIPAA eligibility, GDPR, and FedRAMP High, making it suitable for regulated industries. Additionally, Amazon Bedrock includes configurable guardrails to filter sensitive or harmful content and promote responsible AI use. Security is structured under the AWS Shared Responsibility Model, where AWS manages infrastructure security and customers are responsible for secure configurations and access controls within their Amazon Bedrock environment.
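For example, to keep Bedrock API traffic on the Amazon network, you can create an interface VPC endpoint for the Bedrock runtime. The following boto3 sketch shows one way to do this; the Region, VPC, subnet, and security group IDs are placeholders you would replace with your own values.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface VPC endpoint (AWS PrivateLink) for the Bedrock runtime API
# so that model invocations stay on the Amazon network.
# The VPC, subnet, and security group IDs below are placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # resolve the default Bedrock endpoint to private IPs
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```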
Building on these robust AWS security features, Datadog and AWS have partnered to provide a holistic view of AI infrastructure risks, vulnerabilities, sensitive data exposure, and other misconfigurations. Datadog Cloud Security employs both agentless and agent-based scanning to help organizations identify, prioritize, and remediate risks across cloud resources. This integration helps AWS users prioritize risks based on business criticality, with security findings enriched by observability data, thereby improving their overall security posture in AI implementations.
We’re excited to announce new security capabilities in Datadog Cloud Security that can help you detect and remediate Amazon Bedrock misconfigurations before they become security incidents. This integration helps organizations embed robust security controls and secure their use of the powerful capabilities of Amazon Bedrock by offering three critical advantages: holistic AI security by integrating AI security into your broader cloud security strategy, real-time risk detection by identifying potential AI-related security issues as they emerge, and simplified compliance to help meet evolving AI regulations with pre-built detections.
AWS and Datadog: Empowering customers to adopt AI securely
The partnership between AWS and Datadog is focused on helping customers operate their cloud infrastructure securely and efficiently. As organizations rapidly adopt AI technologies, extending this partnership to include Amazon Bedrock is a natural evolution. Amazon Bedrock is a fully managed service that makes high-performing FMs from leading AI companies and Amazon available through a unified API, making it an ideal starting point for Datadog’s AI security capabilities.
The decision to prioritize the Amazon Bedrock integration is driven by several factors, including strong customer demand, comprehensive security needs, and the existing integration foundation. With over 900 integrations and a partner-built Marketplace, Datadog’s long-standing AWS partnership and deep integration capabilities have helped it quickly develop comprehensive security monitoring for Amazon Bedrock while drawing on its existing cloud security expertise.
Throughout Q4 2024, Datadog Security Research observed growing threat actor interest in cloud AI environments, making this integration particularly timely. By combining the powerful AI capabilities of AWS with Datadog’s security expertise, you can safely accelerate your AI adoption while maintaining robust security controls.
How Datadog Cloud Security helps secure Amazon Bedrock resources
After adding the AWS integration to your Datadog account and enabling Datadog Cloud Security, Datadog Cloud Security continuously monitors your AWS environment, identifying misconfigurations, identity risks, vulnerabilities, and compliance violations. These detections use the Datadog Severity Scoring system to prioritize them based on infrastructure context. The scoring considers a variety of variables, including whether the resource is in production, is publicly accessible, or has access to sensitive data. This multi-layer analysis can help you reduce noise and focus your attention on the most critical misconfigurations by considering runtime behavior.
Partnering with AWS, Datadog is excited to offer detections for Datadog Cloud Security customers, such as:
- Amazon Bedrock custom models should not output model data to publicly accessible S3 buckets
- Amazon Bedrock custom models should not train from publicly writable S3 buckets
- Amazon Bedrock guardrails should have a prompt attack filter enabled and block prompt attacks at high sensitivity
- Amazon Bedrock agent guardrails should have the sensitive information filter enabled and block highly sensitive PII entities (see the guardrail configuration sketch after this list)
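To illustrate the guardrail-related detections above, the following boto3 sketch creates an Amazon Bedrock guardrail with the prompt attack filter set to high input strength and a sensitive information filter that blocks a few common PII entities. The guardrail name and the specific PII entity types shown are illustrative choices, not requirements.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Create a guardrail with a high-sensitivity prompt attack filter and a
# sensitive information filter that blocks common PII entities.
# The name and the PII entity selection below are illustrative.
response = bedrock.create_guardrail(
    name="example-secure-guardrail",
    description="Blocks prompt attacks and highly sensitive PII",
    contentPolicyConfig={
        "filtersConfig": [
            {
                "type": "PROMPT_ATTACK",
                "inputStrength": "HIGH",
                # The prompt attack filter applies to inputs only,
                # so the output strength is set to NONE.
                "outputStrength": "NONE",
            },
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
            {"type": "CREDIT_DEBIT_CARD_NUMBER", "action": "BLOCK"},
            {"type": "PASSWORD", "action": "BLOCK"},
        ]
    },
    blockedInputMessaging="This request was blocked by the guardrail.",
    blockedOutputsMessaging="This response was blocked by the guardrail.",
)
print(response["guardrailId"], response["version"])
```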
Detect AI misconfigurations with Datadog Cloud Security
To understand how these detections can help secure your Amazon Bedrock infrastructure, let’s look at a specific use case, in which Amazon Bedrock custom models should not train from publicly writable Amazon Simple Storage Service (Amazon S3) buckets.
With Amazon Bedrock, you can customize AI models by fine-tuning them on domain-specific data. To do this, that data is stored in an S3 bucket. Threat actors are constantly evaluating the configuration of S3 buckets, looking for the potential to access sensitive data or even the ability to write to S3 buckets.
If a threat actor finds an S3 bucket that was misconfigured to allow public write access, and that same bucket contained data that was used to train an AI model, a bad actor could poison that dataset and introduce malicious behavior or output to the model. This is known as a data poisoning attack.
Typically, detecting these types of misconfigurations requires multiple steps: one to identify the S3 bucket misconfigured with public write access, and one to identify that the bucket is being used by Amazon Bedrock. With Datadog Cloud Security, this detection is one of hundreds that are activated out of the box.
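To make the two-step check concrete, here is a minimal boto3 sketch that lists your Bedrock custom models, resolves each model’s training data bucket, and asks Amazon S3 whether that bucket’s policy makes it public. It is a spot check under simplifying assumptions (it only inspects bucket policies, not ACLs), not a substitute for continuous monitoring.

```python
import boto3
from urllib.parse import urlparse
from botocore.exceptions import ClientError

bedrock = boto3.client("bedrock", region_name="us-east-1")
s3 = boto3.client("s3")

# Step 1: find the S3 buckets that Bedrock custom models were trained from.
# Step 2: check whether those buckets have a bucket policy that makes them public.
for summary in bedrock.list_custom_models().get("modelSummaries", []):
    model = bedrock.get_custom_model(modelIdentifier=summary["modelArn"])
    s3_uri = model["trainingDataConfig"]["s3Uri"]  # for example, s3://bucket/prefix
    bucket = urlparse(s3_uri).netloc

    try:
        is_public = s3.get_bucket_policy_status(Bucket=bucket)["PolicyStatus"]["IsPublic"]
    except ClientError:
        is_public = False  # no bucket policy or no permission; treated as not public here

    print(f"{summary['modelName']}: training bucket {bucket} is_public={is_public}")
```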
In Datadog Cloud Security, you can view this issue alongside surrounding infrastructure using Cloud Map, which provides live diagrams of your cloud architecture, as shown in the following screenshot. AI risks are then contextualized alongside sensitive data exposure, identity risks, vulnerabilities, and other misconfigurations to give you a 360-degree view of risks.

For example, you might see that your application is using Anthropic’s Claude 3.7 on Amazon Bedrock and accessing training or prompt data stored in an S3 bucket that also allows public write access. This could inadvertently impact model integrity by introducing unapproved data to the large language model (LLM), so you need to update this configuration. Though basic, the first step for most security initiatives is identifying the issue. With agentless scanning, Datadog scans your AWS environment at intervals between 15 minutes and 2 hours, so users can identify misconfigurations as they are introduced into their environment.

The next step is to remediate the risk. Datadog Cloud Security provides automatically generated remediation guidance, specific to each risk (see the following screenshot). You get a step-by-step explanation of how to fix each finding. In this scenario, we can remediate the issue by modifying the S3 bucket’s policy to prevent public write access. You can do this directly in AWS, create a Jira ticket, or use the built-in workflow automation tools. From here, you can apply remediation steps directly within Datadog and confirm that the misconfiguration has been resolved.
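As a complementary example of what that remediation can look like, the following boto3 sketch enables the S3 Block Public Access settings on the affected bucket; the bucket name is a placeholder, and your own fix might instead edit the specific bucket policy statement that granted public write access.

```python
import boto3

s3 = boto3.client("s3")

# Enable all four S3 Block Public Access settings on the affected bucket so
# that public ACLs and public bucket policies can no longer expose it.
# "example-training-data-bucket" is a placeholder name.
s3.put_public_access_block(
    Bucket="example-training-data-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```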

Resolving this issue will positively impact your compliance posture, as illustrated by the posture score in Datadog Cloud Security, helping teams meet internal benchmarks and regulatory standards. Teams can also create custom frameworks or iterate on existing ones for tailored compliance controls.

As generative AI is embraced across industries, the regulatory environment will evolve. Datadog will continue partnering with AWS to expand its detection library and help secure AI adoption and compliance.
How Datadog Cloud Security detects misconfigurations in your cloud environment
You can deploy Datadog Cloud Security with the Datadog Agent, agentlessly, or both to maximize security coverage in your cloud environment. Datadog customers can start monitoring their AWS accounts for misconfigurations by first adding the AWS integration to Datadog. This allows Datadog to crawl cloud resources in customer AWS accounts.
As Datadog discovers resources, it runs a catalog of hundreds of out-of-the-box detection rules against them, looking for misconfigurations and threat paths that adversaries can exploit.
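If you manage your Datadog configuration as code, the AWS integration can also be registered through Datadog’s API. The following is a rough sketch using the datadog-api-client Python package; the AWS account ID and role name are placeholders, and it assumes the corresponding IAM role for Datadog already exists in your account.

```python
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.aws_integration_api import AWSIntegrationApi
from datadog_api_client.v1.model.aws_account import AWSAccount

# Register an AWS account with Datadog so its resources can be crawled.
# The account ID and role name are placeholders; the IAM role must already
# exist and trust Datadog. API and application keys are read from the
# DD_API_KEY and DD_APP_KEY environment variables.
configuration = Configuration()
with ApiClient(configuration) as api_client:
    api = AWSIntegrationApi(api_client)
    api.create_aws_account(
        body=AWSAccount(
            account_id="123456789012",
            role_name="DatadogIntegrationRole",
        )
    )
```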
Secure your AI infrastructure with Datadog
Misconfigurations in AI systems can be dangerous, but with the right tools, you can have the visibility and context needed to address them. With Datadog Cloud Security, teams gain visibility into these risks, detect threats early, and remediate issues with confidence. In addition, Datadog has also launched a number of agentic AI security features designed to help teams gain visibility into the health and security of critical AI workloads, including new announcements for Datadog’s LLM Observability features.
Finally, Datadog announced Bits AI Security Analyst alongside other Bits AI agents at DASH. Included as part of Cloud SIEM, Bits is an agentic AI security analyst that automates triage for AWS CloudTrail signals. Bits investigates each alert like a seasoned analyst: pulling in relevant context from across your Datadog environment, annotating key findings, and offering a clear recommendation on whether the signal is likely benign or malicious. By accelerating triage and surfacing real threats sooner, Bits helps reduce mean time to remediation (MTTR) and frees analysts to focus on critical threat hunting and response initiatives. This helps across different threats, including AI-related threats.
To learn more about how Datadog helps secure your AI infrastructure, see Monitor Amazon Bedrock with Datadog or check out our security documentation. If you’re not already using Datadog, you can get started with Datadog Cloud Security with a 14-day free trial.
About the Authors
Nina Chen is a Customer Solutions Manager at AWS specializing in helping leading software companies use the power of the AWS Cloud to accelerate their product innovation and growth. With over 4 years of experience working in the strategic independent software vendor (ISV) vertical, Nina enjoys guiding ISV partners through their cloud transformation journeys, helping them optimize their cloud infrastructure, drive product innovation, and deliver exceptional customer experiences.
Sujatha Kuppuraju is a Principal Solutions Architect at AWS, specializing in cloud and generative AI security. She collaborates with software companies’ leadership teams to architect secure, scalable solutions on AWS and guide strategic product development. Using her expertise in cloud architecture and emerging technologies, Sujatha helps organizations optimize offerings, maintain robust security, and bring innovative products to market in an evolving tech landscape.
Nick Frichette is a Staff Security Researcher for Cloud Security Research at Datadog.
Vijay George is a Product Manager for AI Security at Datadog.

