Table of contents
- Introduction
- The danger of the “next, next” mindset
- Publicly accessible by default
- APIs everywhere!
- Weak authentication by default
- Detecting advanced threats and anomalies
- What could this mean in the real world?
- Lessons learned
- Call to action!
Introduction
Integration platforms connect systems, applications, and data to keep businesses running smoothly. Microsoft Azure Integration Services (AIS) offers tools that let organizations build their own integration platforms instead of relying on direct integrations tied to specific systems. An integration platform communicates with systems using APIs, file transfers like SFTP, or tools like Azure Data Factory. Since the integration platform handles all the communication between your systems, acting as a sort of interpreter and intermediary, it enables integrations between systems without them having to speak the same language or even know about each other. By transforming data into a standard format, it becomes easier to replace systems, like an ERP or CRM, with minimal disruption.
Securing the platform is crucial. AIS components like Logic Apps, Function Apps, Service Bus, and API Management provide powerful features, but without a focus on security, they can introduce risks. A security-first mindset ensures your platform stays protected.
The danger of the “next, next” mindset
A big and unfortunately common risk in Azure, or any cloud environment, is the “next, next” mindset. This happens when resources are set up quickly using default configurations without considering the security implications. It often occurs when speed is prioritized over quality or due to a lack of knowledge. While Azure makes deploying resources straightforward, this convenience can lead to serious oversights if settings aren’t reviewed carefully.
It reminds me of when I was a kid, installing games and software on the family computer. I’d click “yes” and “next” on every prompt without a second thought, eager to play whatever game I’d just discovered. One day, my mom noticed that the computer was running painfully slow. When she looked closer, she found all kinds of bloatware installed: toolbars, trial programs, things I didn’t even realize I had agreed to install. She told me, “You can’t just click through everything without reading what you’re actually installing.” That lesson stuck with me, and the same principle applies when setting up resources in Azure.
Just like blindly clicking “yes” while installing software, relying on default settings in Azure can lead to unintended consequences.
Publicly accessible by default
A common issue in Azure is that many resources with endpoints, such as Logic Apps, Function Apps, or APIs, are publicly accessible by default. If no additional access controls are put in place, these endpoints can be accessed over the internet by anyone.
For example:
- A Function App might let anyone with the correct key execute its code.
- A Logic App might be triggered by anyone who knows its URL and has access to a static key.
- A Storage Account might permit unauthorized file uploads or downloads if shared access signatures (SAS) are mismanaged.
Failing to secure these resources makes them prone to unauthorized access and exploitation.
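To make the Logic App case above concrete, here is a small sketch that inspects a hypothetical HTTP-trigger callback URL (the host, workflow ID, and signature value below are made up). Real callback URLs carry a static shared access signature in the `sig` query parameter, which is why anyone who obtains the URL can invoke the workflow.

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical Logic App HTTP-trigger callback URL (all values invented).
# Real callback URLs embed a static SAS signature in the "sig" parameter.
callback_url = (
    "https://prod-01.westeurope.logic.azure.com/workflows/abc123/triggers/"
    "manual/paths/invoke?api-version=2016-10-01"
    "&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=STATIC-SHARED-ACCESS-SIGNATURE"
)

params = parse_qs(urlparse(callback_url).query)

# The entire secret needed to trigger the workflow travels with the URL:
print(params["sig"][0])   # anyone holding this URL can call the endpoint
print(params["sp"][0])    # the permissions the signature grants
```

Leak the URL (in a log, an email, a browser history) and you have leaked the credential, because the credential is part of the URL.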
APIs everywhere!
One of the challenges in Azure is the sheer number of APIs created by default when resources are deployed. Whether it’s a Logic App with an HTTP trigger, a Function App exposed via an endpoint, or even a Storage Account offering REST APIs, many resources come with their own callable endpoints right out of the box.
The problem is that these APIs can be called directly, bypassing centralized control through Azure API Management. Routing all traffic through API Management gives you a single place to enforce authentication standards like OAuth 2.0 or mTLS, apply rate limiting and input validation, monitor usage, and detect anomalies across every endpoint. This also simplifies governance, because policies are defined once instead of per resource. Without that centralization, APIs remain scattered and harder to control effectively.
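To illustrate the kind of throttling a rate-limit policy gives you, here is a minimal token-bucket limiter in Python. This is an illustrative sketch of the general technique, not how API Management is implemented internally.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter; the same idea behind
    per-client throttling in an API gateway (illustrative sketch)."""
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(10)]  # burst of 10 rapid calls
print(results.count(True))  # only the first 5 get through; the rest are throttled
```

When every API is fronted by the gateway, a policy like this protects all of them at once; when APIs are called directly, each one has to solve throttling (or not) on its own.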
Weak authentication by default
Static keys, often generated automatically when you create a resource, are another major risk. These keys are simple strings that typically grant full access to a resource: anyone who obtains one can interact with the resource without any further authentication. Because a key carries no identity, you cannot implement proper security and authorization based on who is actually consuming the API. Static keys combined with public access can result in a serious breach.
Here are some of the main issues with static keys:
- Static keys usually allow unrestricted access to a resource, meaning an attacker could trigger workflows, interact with APIs, or transfer files.
- Keys are often embedded in code, shared via email, or logged during troubleshooting. Once exposed, they’re difficult to contain.
- Changing static keys requires updating every system that relies on them. This can lead to downtime or errors, so many teams delay key rotation, increasing risk.
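The rotation problem in the last bullet is why Azure resources typically expose a primary and a secondary key. Here is a small Python sketch of that dual-key pattern (the class and method names are my own, for illustration): the secondary key lets you migrate clients before retiring the old primary, but rotation still needs a plan.

```python
import hmac

class KeyValidator:
    """Sketch of the primary/secondary key pattern used so that keys
    can be rotated without downtime (names and flow are illustrative)."""
    def __init__(self, primary: str, secondary: str):
        self.keys = {"primary": primary, "secondary": secondary}

    def is_valid(self, presented: str) -> bool:
        # compare_digest avoids timing side channels when comparing secrets.
        return any(hmac.compare_digest(presented, k)
                   for k in self.keys.values())

    def rotate_primary(self, new_key: str) -> None:
        # Clients still sending the old primary break the moment this runs,
        # unless they were first migrated to the secondary key.
        self.keys["primary"] = new_key

validator = KeyValidator("old-primary", "current-secondary")
print(validator.is_valid("old-primary"))        # True: key grants access
validator.rotate_primary("new-primary")
print(validator.is_valid("old-primary"))        # False: unmigrated clients fail
print(validator.is_valid("current-secondary"))  # True: secondary bridges the gap
```

Notice what the validator cannot tell you: which client presented the key. A valid key is a valid key, which is exactly why identity-based options like Managed Identities are preferable.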
Detecting advanced threats and anomalies
Advanced threats and unusual activity in your Azure resources are hard to detect without Microsoft Defender for Cloud or a comparable third-party product. Defender provides threat detection and alerts for suspicious behavior, but it’s not enabled by default, and you need to turn it on per resource type, like API Management or Storage Accounts.
Even with Defender enabled, someone has to monitor the alerts. If no one is watching, important warnings about attacks or anomalies can be missed. Setting up Defender and having a plan to handle alerts is key to keeping your environment secure.
What could this mean in the real world?
Consider a Logic App that processes purchase orders and sends information to a storage account or a database. If its static key is leaked, an attacker could flood the app with fake requests, disrupting operations and overwhelming downstream systems. Worse, they might manipulate the workflow by altering the content of legitimate data, compromising its integrity or leaking sensitive information. Similarly, a leaked storage account key or database connection string could give an attacker free rein to download sensitive files, delete critical data, or upload malicious content.
This often happens when deployments are rushed, or default configurations are assumed to be secure. Without proper planning and review, the “next, next” mindset can introduce risks that are not only significant but also challenging to resolve later.
Lessons learned
The key takeaway is that default configurations in Azure, while convenient, are not inherently secure. A “next, next” mindset can leave your integration platform exposed to significant risks, from unauthorized access to compromised data. Security must be an active part of your deployment process.
Call to action!
- Take time to audit your existing Azure resources. Ensure endpoints are not publicly accessible unless absolutely necessary, and replace static keys with secure alternatives like Managed Identities.
- Use Azure API Management to enforce consistent authentication, apply policies, and gain visibility into all API traffic.
- Incorporate security into your resource deployment workflows. Automate secure configurations with tools like Bicep, and ensure your templates meet the standard you require.
- Stay updated on best practices for securing Azure resources. Microsoft documentation and the Azure Well-Architected Framework are great places to start.
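As a starting point for the Bicep automation mentioned above, here is a minimal sketch of a storage account deployed with secure defaults. The resource name and API version are examples to adapt to your environment; the properties shown are standard `Microsoft.Storage/storageAccounts` settings.

```bicep
// Illustrative sketch: a storage account with secure-by-default settings.
// Name and API version are examples; adjust to your environment.
resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stintegrationdata'
  location: resourceGroup().location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
  properties: {
    allowBlobPublicAccess: false    // no anonymous blob access
    allowSharedKeyAccess: false     // disable static keys; use Entra ID auth
    minimumTlsVersion: 'TLS1_2'
    publicNetworkAccess: 'Disabled' // reachable via private endpoints only
  }
}
```

Baking settings like these into your templates means every deployment starts from a secure baseline instead of from “next, next” defaults.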