How to Detect Malicious Azure Persistence Through Automation Account Abuse
There are many ways an attacker can maintain persistence and create ‘backdoors’ in Azure that allow them to re-enter the environment. Persistence matters to an attacker because, if their compromised accounts are discovered and removed by the victim organisation, they still need a way to regain access to the environment.
Installation of a webhook to interact with malicious runbooks created through automation accounts is one way an attacker can regain access to a tenant after compromised account access has been revoked. I was inspired to write this blog post about how to detect this technique when I came across an excellent post written by Karl Fosaaen detailing how an attacker can abuse automation accounts to maintain persistence. I have broken this blog post into two sections covering the detection methodology and the attack flow. For a more detailed attack flow, I urge you to take a read of Karl’s blog, as I took what he detailed in his post and recreated his attack to figure out the detection methods.
In short, once an attacker is kicked out of a tenant, they can trigger their runbook to run, granting them access back into the tenant. This doesn’t have to be via the creation of a new user account as depicted in the clip below – it can also be any other action, e.g. the remote execution of commands on VMs or the addition of a key/certificate to service principals.
If you’re interested in the detection of other Azure persistence / backdoor methods, I’ve previously covered three of those techniques in the blog posts listed below:
If you’re curious about attacks on Azure Active Directory (AAD) or M365, you can check out my attack matrix here.
High-Level Overview of the Attack
Automation accounts in Azure are services that allow tasks to be automated through “runbooks” (similar to a job or scheduled task), which are essentially PowerShell scripts that you can import and run. Several organisations I have come across have automated PowerShell scripts that run within their on-premises environment to perform various tasks – automation accounts and runbooks apply that same concept to the Azure environment. As such, abusing automation accounts and runbooks is in a similar vein to the popular attack of installing malicious services or scheduled tasks.
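To make the concept concrete, here is a minimal, hypothetical example of what a benign runbook might look like – a few lines of PowerShell that Azure Automation runs on a schedule, much like an on-premises scheduled task. The managed-identity sign-in and the “environment” tag are assumptions for this sketch:

# Sign in with the automation account's managed identity (assumed to be enabled on the account)
Connect-AzAccount -Identity

# Stop any running VMs tagged as dev outside business hours
Get-AzVM -Status |
    Where-Object { $_.Tags['environment'] -eq 'dev' -and $_.PowerState -eq 'VM running' } |
    ForEach-Object { Stop-AzVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force }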
Once an attacker has sufficient privileges within an environment, they will look to establish persistence to maintain a foothold. The attacker can use an existing automation account, or create a malicious one, and register a runbook that runs PowerShell scripts to conduct malicious activities such as (and not limited to):
Creation of a new user account to allow the attacker back into the network (the scenario we will be using as per Karl’s blog)
Execution of malicious scripts on VMs
Adding a certificate / key to the automation account for single-factor access into AAD
Whatever else the attacker wants to run in PowerShell based on the privileges granted to the automation account
If the victim severs the attacker’s access to the tenant by revoking all compromised user credentials, the attacker can call a webhook to trigger their malicious runbook and run the malicious script again – leading to re-entry into the environment.
Detection Methodology
To detect the abuse of automation accounts and malicious runbooks, there are a few things to consider, as this attack can be leveraged by an attacker in many ways depending on their objectives:
Abuse of an existing automation account, e.g. assigning further privileges or permissions
Creation of a new automation account for malicious purposes
Editing of an existing runbook to insert malicious commands
Creation of a new runbook that has malicious commands
Detection of actions / commands issued within these runbooks (this can vary depending on what the threat actor is aiming to achieve)
As such, I’ve broken down the detection into a few areas to review:
Review webhook creations for signs of potentially malicious webhooks a threat actor can interact with
Review webhook requests for malicious requests
Review all automation account creations for signs of malicious activity
Review and assess permissions assigned to automation accounts
Review all runbooks that are modified / created for signs of malicious activities
Review malicious logons and other details in audit logs that may show other suspicious activities
The log sources that will help you identify this activity include the Automation Account Activity Log, Subscription Activity Log, Resource Activity Log, Runbook Activity Log, Sign-in Logs, the Unified Audit Log (UAL) and the Azure Active Directory Audit Logs.
Step 1: Review webhook creations
If access via compromised accounts has been severed by the target organisation, the attacker can call a webhook to interact with their malicious runbooks and regain access to the target environment. For this attack path to exist, the attacker needs to register a webhook that they can use to interact with their malicious runbook. Within the portal, each runbook lists the webhooks that exist for it. These can be tracked in the activity log for the automation account and ALSO in the subscription activity logs.
Look for the following in the automation account and subscription activity logs:
Operation name: Create or Update an Azure Automation webhook
Operation name: Generate a URI for an Azure Automation webhook
As you can see in the screenshot below – I have indeed created a malicious webhook that I named evilWebhook. You would obviously not call your webhook such an obvious name :)
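If you’d rather script this hunt than click through the portal, a minimal sketch using the Az PowerShell module (assuming an authenticated session and the last 30 days of the subscription activity log) could look like this – the two operations above surface as the “webhooks/write” and “webhooks/generateUri” actions on the Microsoft.Automation resource provider:

# Pull recent activity log entries and keep webhook create / URI-generation operations
Get-AzActivityLog -StartTime (Get-Date).AddDays(-30) |
    Where-Object { $_.Authorization.Action -match 'Microsoft\.Automation/automationAccounts/webhooks/(write|generateUri)' } |
    Select-Object EventTimestamp, Caller, ResourceGroupName, ResourceId |
    Format-Table -AutoSize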
Step 2: Review malicious webhook requests
No authentication needs to occur for an attacker / user to call or interact with a webhook. As such, organisations should take care not to “publicise” their webhooks, as this allows direct interaction with the runbooks they are attached to. Webhook calls can be seen within the runbook input logs – as you can see here, a request was made to this webhook to create a malicious user called “inverseEvil” with the password “Password123”. I leveraged this runbook published by Karl.
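To pull the job history (and with it the webhook input) for a suspicious runbook without the portal, a rough sketch with the Az.Automation module could look like the following – the resource group, automation account and runbook names are placeholders:

$rg = 'rg-automation'      # placeholder resource group
$aa = 'aa-prod'            # placeholder automation account
$rb = 'evilRunbook'        # placeholder runbook name

# List recent jobs for the runbook, then dump each job's streams so the
# webhook input and any output can be reviewed
Get-AzAutomationJob -ResourceGroupName $rg -AutomationAccountName $aa -RunbookName $rb |
    Sort-Object CreationTime -Descending |
    ForEach-Object {
        "--- Job $($_.JobId) created $($_.CreationTime) status $($_.Status) ---"
        Get-AzAutomationJobOutput -ResourceGroupName $rg -AutomationAccountName $aa -Id $_.JobId -Stream Any
    }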
Step 3: Review all automation account creations for signs of malicious activity
Please note here that an attacker can use an existing automation account to conduct these actions, or they can create their own malicious one. The creation of an automation account is logged under the Azure Active Directory Audit Logs, as pictured in the highlighted line below. I would hunt for the following:
Category: ApplicationManagement
Activity: Add service principal
Initiated by: Managed Service Identity
Further details relating to this creation can be seen here, where you can also grep the “ManagedIdentityResourceId” property for “automationAccounts”.
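As an alternative to eyeballing the portal, here is a rough sketch using the Microsoft Graph PowerShell SDK (assuming Connect-MgGraph with the AuditLog.Read.All scope) that pulls “Add service principal” events initiated by Managed Service Identity and keeps the ones whose ManagedIdentityResourceId points at an automation account:

# Pull 'Add service principal' directory audit events and filter for automation accounts
Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Add service principal'" -All |
    Where-Object {
        $_.InitiatedBy.App.DisplayName -eq 'Managed Service Identity' -and
        ($_.TargetResources.ModifiedProperties |
            Where-Object { $_.DisplayName -eq 'ManagedIdentityResourceId' -and $_.NewValue -match 'automationAccounts' })
    } |
    Select-Object ActivityDateTime, ActivityDisplayName, @{ n = 'Target'; e = { $_.TargetResources[0].DisplayName } }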
Step 4: Review and assess permissions assigned to automation accounts
In order for the attacker’s runbook to perform the tasks the attacker wants – e.g. creating a new user account the attacker can leverage, adding a key/certificate to an existing service principal, or executing commands on virtual machines – the attacker needs to ensure the automation account running the runbook has sufficient permissions. The attacker can leverage an existing automation account that already has these permissions, or they can assign the permissions to the automation account they have created or are maliciously using. The logic here is to constantly monitor role assignments and modifications for potentially sensitive roles being added or assigned to service principals / user accounts. For this example, I followed Karl’s blog and used the runbook maliciously to create a new user account that the attacker can then use to log into the tenant. Activities pertaining to role assignments can be tracked and managed by reviewing the Azure Active Directory Audit Logs for:
Category: RoleManagement
Activity: Add member to role
Property Role.DisplayName for sensitive roles, e.g. User Administrator, Cloud Administrator, Virtual Machine Contributor, Global Administrator, etc.
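A sketch of the same hunt using the Microsoft Graph PowerShell SDK is below – the watchlist of roles is only an example and should be tuned to your tenant:

# Example watchlist of sensitive directory roles (placeholder list)
$sensitiveRoles = 'User Administrator', 'Global Administrator', 'Privileged Role Administrator'

Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Add member to role'" -All |
    Where-Object {
        $roleName = ($_.TargetResources.ModifiedProperties |
                     Where-Object { $_.DisplayName -eq 'Role.DisplayName' }).NewValue
        ($sensitiveRoles | Where-Object { $roleName -match $_ })
    } |
    Select-Object ActivityDateTime,
                  @{ n = 'Actor';  e = { $_.InitiatedBy.User.UserPrincipalName } },
                  @{ n = 'Member'; e = { $_.TargetResources[0].DisplayName } }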
Step 5: Review all runbooks that are modified / created for signs of malicious activities
In order for this attack flow to work, malicious runbooks need to be uploaded or an existing runbook needs to be modified to run malicious PowerShell. This can be tracked in many log sources – this step generates events almost everywhere. It can be seen in the Subscription Activity Logs, Resource Activity Logs, Runbook Activity Logs and the Automation Account Activity Logs. Just for the sake of example, I am showing the following from the runbook activity logs. Look for the following operations throughout these log sources:
Operation: Create or Update an Azure Automation Runbook
Operation: Publish an Azure Automation runbook draft
Operation: Write an Azure Automation runbook draft
Filter: Runbook name (look for a runbook name outside the norm).
Typically, Azure runbooks follow the naming convention of AzureAutomation<Something>. I would use this opportunity to hunt for runbooks or names that break convention, or build a baseline of “known good” runbooks and hunt for malicious additions. Also note that existing legitimate runbooks may be edited to have additional PowerShell appended to the bottom – which is why this isn’t a fail-safe detection mechanism.
The full list of runbooks can be seen in the portal under the Automation Accounts > Account > Runbooks directory.
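If you’d rather script this sweep, a minimal sketch (Az modules, placeholder names) that pulls runbook write/draft/publish operations from the activity log and dumps the current runbook inventory for baselining could look like this:

# Runbook create / draft / publish operations in the last 30 days
Get-AzActivityLog -StartTime (Get-Date).AddDays(-30) |
    Where-Object { $_.Authorization.Action -match 'automationAccounts/runbooks/(write|draft/write|publish/action)' } |
    Select-Object EventTimestamp, Caller, ResourceId |
    Format-Table -AutoSize

# Current runbook inventory for one automation account - diff this against a known-good baseline
Get-AzAutomationRunbook -ResourceGroupName 'rg-automation' -AutomationAccountName 'aa-prod' |
    Select-Object Name, RunbookType, State, LastModifiedTime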
Step 6: Review malicious logons and other details in audit logs that may show other suspicious activities
I left this one to last because this final detection depends on what the runbook does. In the case of this runbook as per Karl’s blog – a new user is created to allow the attacker access back into the environment. This is relatively easy to hunt for as this is logged in the Azure Active Directory Audit logs as a new user creation made by an automation account:
Activity: Add user
Initiated By: Managed Service Identity
Activity Type: Add Service Principal
User Agent: Swagger-Codegen*
The malicious logon activity can be seen in the Azure Sign-in Logs, and also in the unified audit logs :)
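A rough sketch of this last hunt with the Microsoft Graph PowerShell SDK (again assuming Connect-MgGraph with AuditLog.Read.All) – find users created by a Managed Service Identity rather than a person, then pull recent sign-ins for those accounts:

# Users created by an automation account (Managed Service Identity) rather than a person
$suspect = Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Add user'" -All |
    Where-Object { $_.InitiatedBy.App.DisplayName -eq 'Managed Service Identity' }

foreach ($entry in $suspect) {
    $upn = $entry.TargetResources[0].UserPrincipalName
    "New user $upn created at $($entry.ActivityDateTime)"

    # Recent sign-ins for the suspicious account
    Get-MgAuditLogSignIn -Filter "userPrincipalName eq '$upn'" -Top 20 |
        Select-Object CreatedDateTime, IPAddress, AppDisplayName
}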
Attack Methodology
As mentioned previously, I followed the attack methodology detailed by Karl in his blog. For a more detailed account of the attack (I’m summarising some of the steps here), please refer to his blog. For the sake of this blog post, I wanted to post the correlating actions I ran so you can map them back to the detection methodology I outlined.
Step 1: Creation of an Automation Account
The attacker needs to either leverage an EXISTING automation account or create a new one to conduct this activity. This can be done within the portal by accessing the “Automation Account” service. The automation account needs to be linked to an existing subscription and resource group.
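For reference, the scripted equivalent of this step with the Az.Automation module is a one-liner – all names and the region here are placeholders:

# Create a new automation account in an existing subscription / resource group
New-AzAutomationAccount -ResourceGroupName 'rg-prod' -Name 'aa-maintenance' -Location 'australiaeast'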
Step 2: Import or modify existing runbook and publish the runbook
If the attacker created their own automation account, a runbook needs to be created to run whatever PowerShell commands the attacker wishes to run. If the attacker takes over an existing automation account with existing runbooks, those runbooks can be altered / modified by the attacker. In this instance, I imported a runbook made by Karl which has the goal of creating a new user account for the attacker if the attacker accounts have been discovered and removed by the target organisation.
Once the runbook is imported or modified, the attacker will need to manually publish the runbook by going into the runbook and hitting “publish”.
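Scripted, the import and publish look roughly like this – the file path and names are placeholders, and in this sketch the runbook is assumed to take its name from the script file:

# Import the malicious PowerShell runbook into the automation account
Import-AzAutomationRunbook -ResourceGroupName 'rg-prod' -AutomationAccountName 'aa-maintenance' `
    -Path '.\AzureAutomationTutorialScript2.ps1' -Type PowerShell

# Publish it so it can be started by a webhook
Publish-AzAutomationRunbook -ResourceGroupName 'rg-prod' -AutomationAccountName 'aa-maintenance' `
    -Name 'AzureAutomationTutorialScript2'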
Step 3: Assign the necessary roles to the automation account
For the sake of this exercise – as our goal is to allow our runbook to create a new account that the attacker can use to access the tenant if they are locked out – the role of “User Administrator” needs to be assigned to the automation account. The roles you choose to assign will depend on what your runbook needs in order to complete whatever malicious operations you want it to perform. Role assignments can be managed in the Azure Active Directory Roles and Assignments section.
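The walkthrough does this through the portal; if you want a scripted equivalent, a sketch using the Microsoft Graph PowerShell SDK could look like the following (the service principal display name is a placeholder matching the automation account created earlier):

# Find the automation account's service principal and the User Administrator role definition
$sp   = Get-MgServicePrincipal -Filter "displayName eq 'aa-maintenance'"
$role = Get-MgRoleManagementDirectoryRoleDefinition -Filter "displayName eq 'User Administrator'"

# Assign the directory role tenant-wide to the service principal
New-MgRoleManagementDirectoryRoleAssignment -PrincipalId $sp.Id -RoleDefinitionId $role.Id -DirectoryScopeId '/'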
Step 4: Create the webhook
If the attacker is locked out of all accounts and doesn’t have a way to log back into the environment – the webhook is crucial as it allows the attacker to interact with their runbook which will allow them to regain access into the environment depending on what their runbook does. The webhook is created within the specific runbook it belongs to:
The URL is only shown on this page – there is no other way to “see” it again – which is why it’s important to take note of the URL when you create the webhook.
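Created from PowerShell instead, the same caveat applies – capture the URI at creation time, because that is the only moment it is returned (names and expiry below are placeholders):

# Create the webhook on the runbook; WebhookURI is only returned at creation time
$webhook = New-AzAutomationWebhook -ResourceGroupName 'rg-prod' -AutomationAccountName 'aa-maintenance' `
    -RunbookName 'AzureAutomationTutorialScript2' -Name 'evilWebhook' `
    -IsEnabled $true -ExpiryTime (Get-Date).AddYears(1) -Force

# Save this URL somewhere safe - it cannot be retrieved again later
$webhook.WebhookURI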
Step 5: Complete the attack!
Once the target organisation detects you and revokes all your accounts, you can trigger the webhook and create a new account to access the tenant again.
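Triggering the runbook is just an unauthenticated POST to the saved webhook URL. The body shape depends entirely on what the runbook expects – the field names below simply mirror the example scenario and are assumptions for this sketch:

# Placeholder webhook URL saved at creation time
$uri  = '<webhook URL saved at creation time>'

# Request body the example runbook is assumed to expect (placeholder credentials)
$body = @{ UserName = 'inverseEvil'; Password = 'Password123' } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $uri -Body $body -ContentType 'application/json'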