Posts by Elastic (old posts, page 3)

CMMC success by design: How Elastic and MAD Security deliver compliance confidence

It’s 3:00 a.m. Another sleepless night. The Cybersecurity Maturity Model Certification (CMMC) audit is six weeks away, and you're facing the stark reality that your cybersecurity documentation doesn't match your actual security capabilities. You're not alone — across the US federal government and Defense Industrial Base (DIB), security leaders are confronting the gap between compliance documentation and actual security resilience. What differentiates organizations that confidently pass these audits from those scrambling at the last minute?

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

Elastic and Keep join forces to help users manage alerts and automate workflows

We are thrilled to announce today that Elastic has entered into an agreement to acquire Keep Alerting Ltd (“Keep”), an open source AIOps company. The company unifies alerts behind a single pane of glass and dedupes, correlates, and prioritizes events to reduce noise and automate root cause analysis. Additionally, its workflow engine automates the remediation of incidents with workflow-as-code and a no-code visual interface.

“We’re excited to join Elastic and bring Keep’s capabilities to Elastic users,” said Tal Borenstein, founder and CEO of Keep. “As open source companies, we think there are natural synergies between Keep and Elastic, our products, and our communities.”

Keep is currently available as open source software, as enterprise software, and as a cloud service. Headquartered in Tel Aviv, Israel, the company’s products are used by SREs, engineers, and operations teams worldwide.

Going forward, Keep will integrate with Elasticsearch and Kibana and remain open source. Context from across the Keep tech stack (alerts, incidents, service topology, change management, playbooks) will benefit from the speed and scale of Elasticsearch. Meanwhile, Elastic users will benefit from the world-class AIOps and workflow automation capabilities that Keep brings to the Elastic Stack. Customers of our Observability, Security, and Search solutions will all benefit from greater AI-powered automation, enabled by Keep.

I’m delighted to welcome the Keep team to join us on our journey to help our users accelerate business outcomes through Search AI, and we look forward to sharing further updates on our plans.

Disclaimer
The release and timing of any features or functionality described in this document remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

How can we modernize payment infrastructure at a global scale?

Financial services companies face unprecedented challenges in managing real-time payments at a global scale. SWIFT (Society for Worldwide Interbank Financial Telecommunication) facilitates international payments by transmitting standardized, secure messages between financial institutions. When a payment is initiated, the originating bank uses SWIFT to send detailed instructions to the recipient bank, including the amount and beneficiary information. While SWIFT does not transfer funds itself, it enables accurate communication for clearing and settlement through correspondent or central bank systems. For context, it's estimated that the network facilitates the movement of approximately US$5 trillion per day.

Navigating complexity and regulation with a unified data foundation

SWIFT, like the financial services companies it serves, has also seen regulation and compliance become critical factors over the last 10–15 years. For example, Payment Services Directive 3 (PSD3) and the new Instant Payments Regulation require money transfers to be processed within 10 seconds. This led SWIFT to insource more critical services like Know Your Customer (KYC), sanctions utility, and fraud control. But the expansion of services created a more interconnected and complex infrastructure.

The challenges of modernizing payment infrastructure discussed in the webinar — the need for real-time processing, enhanced data analytics, and regulatory compliance — strongly align with recent insights from Deloitte’s 2025 report “Shaping the Future of Payments,” which highlights how financial institutions must evolve as checks move toward extinction and digital payments accelerate. 

Similarly, Capgemini's “World Payments Report” predicts that instant payments will represent 22% of all non-cash transaction volumes globally by 2028, with account-to-account payments expected to offset 15%–25% of future card transaction volume growth. Both consultancies reinforce SWIFT's observations about modernization challenges, with Deloitte emphasizing the critical need for banks to invest in advanced fraud detection systems and adopt new payment standards like ISO 20022. But here’s the problem: Only 5% of banks currently demonstrate high business and technology readiness for instant payment adoption.

Tips for choosing an AI-driven SIEM

Artificial intelligence is rewriting the rules for cybersecurity on both sides of the battle. Cloud adoption, a broadening attack surface, and AI-fueled cyber threats are driving organizations to rethink their approach to security. Discussions on the best way to adapt to a highly dynamic threat environment will naturally steer toward updating SIEM, as it is core to today’s security operations.

Legacy SIEMs weren’t designed to anticipate the scale and ferocity of today’s threat environment. Adapting cybersecurity practices requires a modern platform that offers full visibility, uses advanced analytics, automates with AI, and supports flexible deployment in hybrid and multi-cloud environments.

Here, we’ll explore some of the key considerations to account for when adopting a new AI-driven SIEM that can rise to the occasion.

Aligning SIEM to your business

SIEM is more than just a tool — it can be a strategic facilitator that empowers security teams to meet the objectives set forth by leadership. That’s why the first step in any successful SIEM selection and implementation is understanding your organization's unique risk profile, operational needs, and future priorities.

  • Know your crown jewels. Are attackers after sensitive data, IP, financials, or infrastructure? Your threat landscape should directly inform your SIEM’s capabilities.

  • Support business agility. Avoid SIEMs that box you in with rigid (and very costly) licensing, proprietary integrations, or closed architectures. Instead, prioritize platforms built to evolve with your tech stack and that play nicely with other technologies.

  • Plan for variability. Between infrastructure changes, onboarding disparate data types, balancing shifting priorities, and tackling anything else that comes your way, your SIEM should be ready to scale dynamically for your needs — without introducing excess cost or complexity.

  • Avoid vendor lock-in. Seek open SIEM solutions that offer tiered licensing, broad cloud support, and a vibrant ecosystem of integrations.

Unifying security monitoring with Elastic Security and Microsoft Sentinel

Working across the security ecosystem

At Elastic, we have always had one mission: to bring the best search and analytics capabilities to wherever our users are. This principle is built into all three of Elastic’s solutions, including Elastic Security. The AI-driven security analytics solution is built to be open, transparent, and available to users of all kinds. 

Microsoft Sentinel is a widely adopted security information and event management (SIEM) solution, including among Elastic users. As an Azure-based SaaS product, it integrates seamlessly with other Microsoft products and beyond.

Elastic has complementary strengths that can bring a lot of value to security teams that use Microsoft Sentinel. Let’s dive into a few of the main ones.

[Image: Logical case webhook architecture]

Part 1: Create Azure Logic Apps 

Although it’s not complicated to build the Logic Apps, we will provide an Azure Resource Manager (ARM) template that can speed up the process.
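For orientation, an ARM template of this kind generally follows the skeleton below. This is an illustrative sketch only — the parameter names, API versions, and the empty workflow definition are placeholders, not the contents of the downloadable template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workflowName": { "type": "string", "defaultValue": "example-elastic-create-incident" }
  },
  "resources": [
    {
      "type": "Microsoft.Logic/workflows",
      "apiVersion": "2019-05-01",
      "name": "[parameters('workflowName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "definition": {
          "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
          "triggers": {
            "manual": { "type": "Request", "kind": "Http" }
          },
          "actions": {}
        }
      }
    }
  ]
}
```

The provided template additionally declares the Sentinel API connection and the second read-only workflow; the skeleton just shows where each piece lives.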

1. Download the template here, which includes:

  • An API connector for Microsoft Sentinel

  • An Azure Logic Apps workflow to create, update, and add comments to an incident

  • An Azure Logic Apps workflow that gets the details of an incident (this separates modifications from reads)

2. Search in the Azure console for “Deploy a custom template.”

3. Click Build your own template in the editor. Upload the template file you downloaded.

[Image: Uploading the template]

4. Click Save.

5. Fill out the details for the template:

  • Subscription and resource group for the Logic Apps

  • Names for your workflows and your API Connector to Sentinel

  • The name of your Sentinel Workspace

  • The Resource Group of your Sentinel Workspace

[Image: Configuring template variables]

6. Click Review and create.

7. Click Create.

8. After your deployment is complete, navigate to the new API Connection by clicking Deployment details and then clicking on the created API Connection.

9. Within the API Connection, authorize Microsoft Sentinel. Navigate to General→Edit API connection and click Authorize. After you authorize, click Save.

10. Now that you have authorized the API connection, the logic apps can interact with Sentinel. Open both workflows in new tabs so they are available for the next part.

Part 2: Configure case management connector in Kibana

1. In Kibana, head to Stack Management → Connectors and click Create Connector.

2. Select Webhook - Case Management, give your connector a name, and set Authentication to None, as the authentication will be embedded in the URL. Switch on Add HTTP header and enter Content-Type as a key and application/json as its value.

3. On the next screen, enter the endpoint details for case creation. Copy the URL of the corresponding Logic App (e.g., example-elastic-create-incident) and paste it into the Create case URL box. You will find the URL inside the Request action’s details page; navigate to Development Tools → Logic Apps designer to see the workflow.

[Image: Capturing the Logic App webhook URL]

In the Create case object box, build a JSON object using the variables that Kibana makes available. The keys here must match up with the JSON schema that the Logic App’s request connector is expecting, so that the values can be correctly extracted and used where needed.

{
  "action": "create-incident",
  "case_id": {{{case.id}}},
  "case_title": {{{case.title}}},
  "case_severity": {{{case.severity}}},
  "case_status": {{{case.status}}},
  "case_description": {{{case.description}}},
  "case_tags": {{{case.tags}}},
  "elastic_url_base": "/app/security/cases/"
}
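For illustration, once Kibana substitutes the mustache variables, the Logic App receives a payload shaped roughly like the following. All values here are made up:

```json
{
  "action": "create-incident",
  "case_id": "a1b2c3d4-0000-0000-0000-000000000000",
  "case_title": "Suspicious login activity",
  "case_severity": "high",
  "case_status": "open",
  "case_description": "Multiple failed logins followed by a success.",
  "case_tags": ["auth", "brute-force"],
  "elastic_url_base": "/app/security/cases/"
}
```

The Logic App's Request trigger schema should accept fields of this shape so each value can be extracted downstream.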

Finally, set the Create case response external key. This tells Kibana how to extract the Microsoft Sentinel incident ID from the response returned by your Logic app.

Create case response external key:

sentinel_incident_id
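In other words, the Logic App's HTTP response is expected to carry a top-level field with that name, along these lines (the incident ID value is hypothetical):

```json
{
  "sentinel_incident_id": "3f2b6d9e-1111-2222-3333-444455556666"
}
```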

4. On the next screen, configure the connector needed to get case information from Microsoft Sentinel. Here, use a new variable called external.system.id, which is populated with the Microsoft Sentinel incident ID extracted during the previous step. Go to the Logic App created earlier that gets the incident information (e.g., example-elastic-get-incident) and retrieve the URL. Use the URL and add external.system.id as a query parameter.

Get case URL:

&sentinel_incident_id={{{external.system.id}}}

Also configure the following.
Get case response external title key:

title

External case view URL:

https://portal.azure.com/#asset/Microsoft_Azure_Security_Insights/Incident/subscriptions//resourceGroups//providers/Microsoft.OperationalInsights/workspaces//providers/Microsoft.SecurityInsights/Incidents/{{{external.system.id}}}

5. On the final screen of the connector wizard, we configure the final two endpoints for creating case updates and adding comments. Use the URL from the create case workflow and set the following parameters:
Update case method: POST
Update case object:

{
  "action": "update-incident",
  "case_id": {{{case.id}}},
  "case_title": {{{case.title}}},
  "case_severity": {{{case.severity}}},
  "case_status": {{{case.status}}},
  "case_description": {{{case.description}}},
  "case_tags": {{{case.tags}}},
  "sentinel_incident_id": {{{external.system.id}}}
}

Create comment method: POST

Create comment object:

{
  "action": "add-comment-to-incident",
  "case_comment": {{{case.comment}}},
  "sentinel_incident_id": {{{external.system.id}}}
}

6. Kibana provides an option to test the connector by filling in dummy values. Make sure to use this and ensure everything works before rolling out for real!

Part 3: Activate the case management connector

  1. In Kibana, head to Security → Cases and click Settings.

  2. Find the External incident management system section and select your new connector from the drop-down box. You can choose to edit the connector settings from here also if necessary. 

  3. Now, whenever you go to create a case in Kibana, the External Connector Fields section will be completed and your connector will be selected by default. 

  4. Try creating a case, updating its status, and adding comments! Then head over to Microsoft Sentinel to see all the same information already synchronized.

[Image: Logical alert webhook architecture]

Part 1: Create Azure Logic Apps

To replicate this architecture, we have provided an example ARM template that creates a Logic App and related Log Analytics collector to connect the two.

To deploy the template:

  1. Download our example workflow template here

  2. In Azure, search for “Deploy a custom template.”

  3. Select Build your own template in the editor.

  4. Select Load file.

  5. Choose the downloaded template.

  6. Click Save.

  7. Choose your resource group and name.

  8. Choose your name for the workflow, the Log Analytics Collector, and the details of your Log Analytics workspace.

  9. To find your workspace ID and key, navigate to the workspace, select Agents, and expand the agent setup instructions.

  10. After you review and create the deployment, navigate to Logic Apps to check the created workflow.


Part 2: Configure the Elastic Alert Connector

Now that we have the Logic App workflow, we need to configure Elastic to send alerts to Logic Apps. To do that we can follow the documentation.

  1. In Kibana, navigate to Stack Management.

  2. In Connectors, create a new Webhook connector. 

  3. In Logic Apps, open your workflow, open the “Logic Apps Designer” view, click the request trigger, and copy the “HTTP URL”

  4. Fill out the Webhook connector configuration:

    • Name: Logic Apps

    • Method: POST

    • URL: <logic apps HTTP URL>

    • Authentication: none (this is embedded in the url)

    • Add HTTP Header: Content-Type =  application/json

  5. Click Save and Test.

  6. In the Test box, use a basic test body such as: {"alert":{"id": "Test"}}

  7. You should see “Test was successful.” 

  8. You can also see the test event in the Logic App by clicking on Overview and scrolling down to the Runs History table. 

You now have a working route between Elastic and your chosen Log Analytics workspace.

Part 3: Create an Elastic SIEM detection rule to test the connector

1. In Kibana, navigate to Security → Rules → Detection Rules (SIEM).

2. Click Create new rule.

3. In the first section, “Define Rule,” use the following configuration:

  • Select Custom Query.

  • Optional: You may leave the index patterns settings as default, but if you want to limit the rule to certain indices or data views, change that here. 

  • In the Custom Query box, enter the KQL query message:malware. This will fire the alert whenever a log is found that contains the word “malware” in its message field.

  • Click Next.

4. In the second section, “About rule,” complete the following:

  • Enter a name (e.g., Sentinel Test Rule).

  • Enter a description (e.g., My Elastic detection rule to test Sentinel integration).

  • Click Next.

5. In the third section, “Schedule rule,” you can leave everything as default. 

  • Optional: If you’d like the rule to fire more frequently to make testing quicker, reduce the Runs every value from 5 minutes to 1 minute, for example.  

  • Click Next.

6. In the final section, “Rule actions,” link the rule to the connector we defined in the previous section.

  • Select the connector type “Webhook.”

  • From the Webhook connector dropdown, select your connector. (Note: If you only have one of these, skip this step as it will already be selected for you.) 

  • For Action frequency, select For each alert and Per rule run.

  • Optional: You have the option to filter the alerts that are sent to the webhook by using a query string, or a timeframe, but we can leave these options disabled for now.

  • For the Body, package up the information available from the alert into a JSON object that will be sent to the webhook. This is entirely flexible and you can include as much or as little context as you want. We have provided an example body which you can copy/paste for simplicity.

  • Click Add action.

  • Click Create & enable rule.
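The Body mentioned above can be sketched as follows. The mustache variables are standard Kibana rule-action variables, while the surrounding field names are arbitrary choices you can adapt to whatever schema your Logic App's Request trigger expects:

```json
{
  "alert": {
    "id": "{{alert.id}}",
    "rule_name": "{{rule.name}}",
    "rule_id": "{{rule.id}}",
    "date": "{{date}}"
  }
}
```

Note that this shape matches the earlier connector test body ({"alert":{"id": "Test"}}), so the same Logic App path handles both.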

Part 4: Create a Microsoft Sentinel analytics rule to pick up the alert

The final piece of the puzzle is to define a query rule in Microsoft Sentinel that takes some action when Elastic alerts appear as logs in the Log Analytics stream.

1. In Microsoft Sentinel, navigate to Configuration → Analytics.

2. Click Create and then NRT Query Rule.

3. In the General section, complete the following:

  • Enter a name (e.g., Sentinel Test Rule).

  • Enter a description (e.g., My Sentinel analytics rule to pick up Elastic alerts).

  • Optional: You can also choose to set a Severity here and map the rule to tactics, techniques, and sub-techniques from MITRE ATT&CK.

  • Click Next.

4. In the Set rule logic section, complete the following:

  • In the Rule query box, enter a KQL string (but remember, this time we mean Kusto Query Language and not Kibana!). The simplest option if you don’t want any filtering or transformation is to enter only the table name (e.g., ElasticAlerts_CL).

  • Optional: In the Alert enhancement section, you can enrich the generated Microsoft Sentinel alerts in a number of ways. If the alert data from Elastic Security contains identifiers that relate to entities recognized by Microsoft Sentinel, the Entity mapping option allows these links to be made. Parameters from the Elastic Security alert can be added as key-value pairs in the Custom details option or formatted as the name or description of the alert in the Alert details option.

  • For Event Grouping, select Trigger an alert for each event.

  • Click Next.

5. In the Incident settings section, complete the following:

  • Leave the “Create incidents from alerts triggered by this analytics rule” toggle enabled.

  • Optional: There are more advanced alert grouping options available here to help you define the incidents to be created in finer detail.

  • Click Next.

6. Skip the Automated response section and click Next.

7. Review all entered details and, if all looks good, click Save to add the new rule to Sentinel. The rule is enabled automatically.

And we’re done! From this point onward, any time our detection rule fires in Elastic Security and creates an alert, we will see corresponding alerts and incidents in Sentinel.

If you would like to test the integration, simply send a log to Elastic to trigger the rule. One way to do this is by using Kibana’s Console (inside Management menu → Dev Tools).

Enter the following into the Shell panel and hit the play button to send the request:

POST logs-delete-me/_doc
{
  "timestamp": "2025-03-03T10:11:11Z",
  "message": "malware has been installed. panic!"
}

Scenario 3: Ingesting Microsoft Sentinel logs into Elastic Security

Even if your main goal is to provide additional information into your Microsoft Sentinel instance, it is also worth thinking about replicating logs relating to alerts, incidents, and events from Microsoft Sentinel into Elastic Security. If your use case for Elastic involves threat hunting through diverse data or carrying out historic analysis, it is incredibly valuable to have the context from Microsoft Sentinel as part of your investigation in Elastic.

Elastic Security provides an out-of-the-box integration for Sentinel that collects and parses alerts and incidents from the Microsoft Sentinel REST API and events from the Azure Event Hub.

For more information, see the documentation.