Prepare for the AZ-700 Azure Networking exam!

In this post, I’ll tie together the study links from my related YouTube video series on the AZ-700 exam. You can find the videos I produced to help you prepare for this exam at https://aka.ms/YouTube/SC-900. (NOTE: As of 12/19/2021, only the first video in the series has been uploaded. The others will follow, so keep checking back!) The exam objectives, which should guide your studying and which were also used to guide the creation of the content, can be found at Exam AZ-700: Designing and Implementing Microsoft Azure Networking Solutions.

As referenced in the introduction of the video, Microsoft MVP Charbel Nemnom has another study guide with many solid reference links. It can be found at aka.ms/AZ-700StudyGuide.

Here are the slides to use for studying, which contain all of the links provided and discussed in the video.

https://aka.ms/AZ-700Deck

Download the deck by clicking on the download icon on the right side, as shown below:

Please let me know if the content helps you pass!

SC-900 Exam Prep Material

In this post, I’ll tie together the study links from my related YouTube video on the SC-900 exam. The video I produced to help you prepare for this exam can be found at https://aka.ms/YouTube/SC-900. (Edit 6/28/2021: The YouTube video should be available now!) The exam objectives, which should guide your studying and which were also used to guide the creation of the content, can be found at Exam SC-900: Microsoft Security, Compliance, and Identity Fundamentals – Learn | Microsoft Docs.

As referenced in the introduction of the video, Microsoft MVP Charbel Nemnom has another study guide with many solid reference links, which can be found at aka.ms/SC-900StudyGuide.

Here are the slides to use for studying, which contain all of the links provided and discussed in the video.

https://aka.ms/SC-900Deck

Download the deck as shown below:

SC-200 Exam Prep Material

I love taking certification exams…don’t you? It’s always fun to test your knowledge and see if you’ve been digging deep enough into the products on a daily basis to really understand them.

In this post, I’ll tie together the study links from my related YouTube videos on the SC-200 exam. You can find the videos I produced to help you prepare for this exam at https://aka.ms/YouTube/SC-200. The exam objectives, which should guide your studying and which were also used to guide the creation of the video content, can be found at Exam SC-200: Microsoft Security Operations Analyst – Learn | Microsoft Docs.

As referenced in the introduction of the video, there was already another study guide out there with many solid reference links, which can be found at aka.ms/SC-200StudyGuide.

Here are the slides to use for studying, which contain all of the links provided and discussed in the video.

https://www.linkedin.com/smart-links/AQEKPGdQrCVGgw

Download the deck as shown below:

I’ll be posting SC-900 exam prep content shortly, so check back in a week or so.

Using DeTTECT and the MITRE ATT&CK Framework to Assess Your Security Posture

NOTE: Justin Henderson delivers some INCREDIBLE training on SIEM Tactical Analysis through SANS. This article is based on some points I learned during that course.

SIEM Training | SIEM with Tactical Analysis | SANS SEC555

– – – – – – – – – – – – – – – – – –

One of the things I’ve become very interested in lately is the MITRE ATT&CK Framework, which can be found at https://attack.mitre.org. The great thing about this tool is that it provides a real-world, standardized method for understanding how adversaries attack specific platform types, including the tactics and techniques they typically leverage, the methods used by specific threat groups, and so on.

This is valuable information for understanding the threat landscape and the way that attackers think. ATT&CK also gives you information about the data sources where you would typically need to look for indicators of compromise related to the types of attacks being carried out. For example, in the screenshot below, you can see that the indicators of compromise (IoCs) for the “Phishing for Information” technique might be found in logs on your email gateway, your network intrusion detection system, or a packet capture (among other data sources).

This is really good information for understanding what data sources might be useful to look through during an incident, but it leaves one important question unanswered for SOC analysts: am I pulling the right logs into my SIEM (such as Azure Sentinel) in order to detect an adversary that is using a specific tactic?

Using DeTTECT

This is where a tool like DeTTECT, in conjunction with ATT&CK, comes into play.

DeTTECT is an open-source tool, located on GitHub (https://github.com/rabobank-cdc/DeTTECT). Its intent is to help SOC teams compare the quality of their data logging sources to the MITRE ATT&CK matrices so that they can easily see whether they have the logging coverage needed to detect adversaries, based upon the tactics those adversaries may be using. What makes DeTTECT so valuable is that it presents this in a very visual way that is easy to understand. This can help you see where to shore up your organization’s logging for the specific types of threats you may be facing. For example, in the screenshot above, maybe you’re logging traffic on your web proxy, but you aren’t capturing any logging on your email gateway. You could be missing a very valuable source of intelligence in your threat detection by not ingesting those logs into Azure Sentinel. DeTTECT can help you analyze your logging and find those weaknesses. Let’s see how.

The DeTTECT framework uses a Python tool, YAML administration files, the DeTTECT Editor and several scoring tables to help you score your environment. The following steps were performed on an Ubuntu VM with the DeTTECT image running inside a Docker container. The instructions for setting up a container running DeTTECT are listed here:

https://github.com/rabobank-cdc/DeTTECT/wiki/Installation-and-requirements
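If you want to follow along, the container setup looks roughly like this; the image name, port and folder mappings here are taken from that wiki page at the time of writing, so verify them against the current instructions:

  # Pull the DeTTECT image and start an interactive container,
  # mapping the editor port and the input/output folders
  docker pull rabobankcdc/dettect:latest
  docker run -p 8080:8080 -v $(pwd)/output:/opt/DeTTECT/output -v $(pwd)/input:/opt/DeTTECT/input --name dettect -it rabobankcdc/dettect:latest /bin/bash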

As you see in the following screenshot, I’ve started up the DeTTECT Editor running in the container.
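Starting the editor from inside the container is a single command. A sketch, assuming the container session drops you into the DeTTECT folder (the exact URL it prints may differ by version):

  # Start the DeTTECT Editor; it listens on port 8080
  python dettect.py editor
  # The last line of output shows the URL to browse to,
  # e.g. http://localhost:8080/dettect-editor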

As the last line shows, I can now open the DeTTECT Editor at that URL and begin interacting with the editor. After I bring up the web page, my first step is to add Data Sources. This allows me to define for DeTTECT the logs I am capturing and ingesting into Azure Sentinel so that I can qualitatively measure the visibility I have across my logging sources.

Click on “New File”.

In this case, I want to tell DeTTECT how I’m capturing logs for Windows endpoints, so I click on “Add data source”.

Next, I define the data source as Windows event logs. Notice that the tool provides options for you to choose from as you type. For example, you could select AWS CloudTrail logs, Azure activity logs, Office 365 audit logs and numerous other sources. These options correspond to the logging sources referenced in the MITRE ATT&CK matrices, as we saw earlier.

I add the date the data source was registered and connected, define whether it is available for data analytics and whether it’s been enabled.

Know Thy Data!

Now comes the part where you must know your data. In the next section, I am defining the quality of the data that I’m getting from my Windows event logging configuration. For example, maybe I’m only capturing Windows System logs (reducing my overall visibility). In my example, I’m going to assume that I’m regularly logging data using Sysmon on all my devices, and I ingest it into Azure Sentinel on a regular basis. This gives me great visibility due to the completeness of the data that Sysmon can generate (that’s going to be another blog). I move the bars accordingly.

I go through the same process for all of my data sources, and I end up with something like what you see below.

In a production environment, the list would (hopefully) be much longer, assuming you have more than just a couple logging sources. Note that the quality and coverage of the logging is going to vary according to the system or tool you are evaluating. You may get lots of detailed information from one type of networking device, while another gives fairly limited logging information. Make sure that you reflect that difference in the data sources you add.

Once you’ve added all the data sources for your environment, click on “Save YAML file”.

File Conversion

The next step involves converting the YAML file into JSON so that it can be used in the ATT&CK Navigator tool. Back in the terminal window on my Ubuntu VM, I run the conversion.
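A sketch of that conversion, assuming the editor saved the file as data_sources_example_1.yaml; per the DeTTECT documentation, the data source mode with the layer switch writes an ATT&CK Navigator layer file in JSON to the output folder (check python dettect.py ds --help for the current flags):

  # Generate an ATT&CK Navigator layer (JSON) from the data source YAML
  python dettect.py ds -fd input/data_sources_example_1.yaml -l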

The result is shown below:

Layering DeTTECT Data over the ATT&CK Matrix  

Now comes the fun part – seeing how your organization’s data logging sources match up to the ATT&CK Framework. This will give you a visual indicator of how much coverage and visibility you potentially have into different techniques and tactics used by adversaries.

We will be layering our data_sources_example_1.json file (created in the previous section) on top of the MITRE ATT&CK Navigator. First, let’s browse to the MITRE ATT&CK Navigator, located here: https://mitre-attack.github.io/attack-navigator/

I click on Open Existing Layer –> Upload from Local, and then I browse to the location where I’ve stored my data_sources_example_1.json file.

And *whammo!* – look at the visualization I get!

What Does This Tell Me?

So how do I use this? Think of this as a heat map for logging coverage. The darkest purple boxes reflect the attack techniques that your organization would have the greatest visibility into, based on the quality of the logs you’re ingesting into Sentinel. The lighter the box, the less visibility your organization would have into an attacker using that technique, based on the current logging strategy.

This can be a great way to illustrate to management where you have gaps in your security logging, as it is very easy to understand the color-coding.

More importantly, perhaps, it gives you a starting point for developing a strategy for improving logging capabilities in your organization. As an example, I could look at this and immediately see that my fictional Contoso organization may have a significant amount of exposure, based on the fact that Initial Access –> Supply Chain Compromise has no logging coverage at all.

Therefore, I may want to increase my logging in that area of the environment. On the other hand, maybe that’s not an area where Contoso feels they have much of a vulnerability (although I think recent security events should lead Contoso to rethink that position!).

In reality, we are always going to have parts of our environment where the logging is healthy and robust, and we will also have areas where the logging isn’t quite what we would like it to be.

By using a tool like DeTTECT in conjunction with the ATT&CK Navigator, you can begin to prioritize where to increase your logging and continue to improve the security of your organization.

Keeping the Lights On: Business Continuity for Office 365

Early in my career at Microsoft, I worked in Microsoft Consulting Services, supporting organizations looking to deploy Exchange 2007 and 2010 in their on-premises environments. During those engagements, the bulk of the conversations focused on availability and disaster recovery concepts for Exchange – things like CCR, SCR and building out the DAG to ensure performance and database availability during an outage – whether it was a disk outage, a server outage, a network outage or a datacenter outage.

Those were fun days. And by “fun”, I mean “I’m glad those days are over”.

It’s never a fun day when you have to tell a customer that they CAN have 99.999% availability (of course – who DOESN’T want five 9’s of availability??) for their email service, but it will probably cost them all the money they make in a year to get it.

Back then, BPOS (Business Productivity Online Service) wasn’t really on the radar for most organizations outside of some larger corporate and government customers.

Then on June 28, 2011, Microsoft announced the release of Office 365 – and the ballgame changed. In the years since then, Office 365 has become a hugely popular service, providing online services to tens of thousands of customers and millions of users.

As a result, more businesses are using Office 365 for their business-critical information. This, of course, is great for our customers, because they get access to a fantastic online service, but it requires a high degree of trust on the part of customers that Microsoft is doing everything possible to preserve the confidentiality, integrity and availability of their data.

A large part of that means that Microsoft must ensure that the impact of natural disasters, power outages, human attacks, and so on is mitigated as much as possible. I recently heard a talk that dealt with how Microsoft builds our datacenters and accounts for all sorts of disasters – earthquakes, floods, undersea cable cuts – even mitigations for a meteorite hitting Redmond!

It was an intriguing discussion and it’s good to hear the stories of datacenter survivability in our online services, but the truth is, customers want and need more than stories. This is evidenced by the fact that the contracts that are drawn up for Office 365 inevitably contain requirements related to defining Microsoft’s business continuity methodology.

Our enterprise customers, particularly those from regulated industries, are routinely required to perform business continuity testing to demonstrate that they are taking the steps necessary to keep their services up and running when some form of outage or disaster occurs.

The dynamics change somewhat when a customer moves to Office 365, however. These same customers now must assess the risk of outsourcing their services to a supplier, since the business continuity plans of that supplier directly impact the customer’s adherence to the regulations as well. In the case of Office 365, Microsoft is the outsourced supplier of services, so Microsoft’s Office 365 business continuity plans become very relevant.

Let’s take a simple example:

A customer named Contoso-Med has a large on-premises infrastructure. If business continuity testing were being done in-house by Contoso-Med and they failed the test, they would be held responsible for making the necessary corrections to their processes and procedures.

Moving those same business processes and data to Office 365, however, does not absolve Contoso-Med of the responsibility to ensure that the services meet the business continuity standards defined by regulators. They must still have a way of validating that Microsoft’s business continuity processes meet the standards defined by the regulations.

However, since Contoso-Med doesn’t get to sit in and offer comments on Microsoft’s internal business continuity tests, they must have another way of confirming that they are compliant with the regulations.

First…a Definition

Before I go much further, I want to clarify something.

There are several concepts that often get intermingled and, at times, used interchangeably: high availability, service resilience, disaster recovery and business continuity. We won’t dig into details on each of these concepts, but suffice it to say they all have at their core the desire to keep services running for a business when something goes wrong. However, “business continuity and disaster recovery” from Microsoft’s perspective means that Microsoft will address the recovery and continuity of critical business functions, business system software, hardware, IT infrastructure services and data required to maintain an acceptable level of operations during an incident.

To accomplish that, the Microsoft Online Service Terms (http://go.microsoft.com/?linkid=9840733), which is sometimes referred to simply as the OST, currently states the following regarding business continuity:

  • Microsoft maintains emergency and contingency plans for the facilities in which Microsoft information systems that process Customer Data are located
  • Microsoft’s redundant storage and its procedures for recovering data are designed to attempt to reconstruct Customer Data in its original or last-replicated state from before the time it was lost or destroyed

 

Nice Definition. But How Do You Do It?

I’ve referenced the Service Trust portal in a few other blog posts and described how it can help you track things like your organization’s compliance for NIST, HIPAA or GDPR. It’s also a good resource for understanding other efforts that factor into the equation of whether Microsoft’s services can be trusted by their customers and partners.

A large part of achieving that level of trust relates to how we set up the physical infrastructure of the services.

To be clear, Microsoft online services are always on, running in an active/active configuration with resilience at the service level across multiple datacenters. Microsoft has designed the online services to anticipate, plan for, and address failures at the hardware, network, and datacenter levels. Over time, we have built intelligence into our products that allows us to address failures at the application layer rather than at the datacenter layer, where we would have to rely on third-party hardware.

As a result, Microsoft is able to deliver significantly higher availability and reliability for Office 365 than most customers are able to achieve in their own environments, usually at a much lower cost. The datacenters operate with high redundancy and the online services are delivering against the financially backed service level agreement of 99.9%.

The Office 365 core reliability design principles include:

  • Redundancy is built into every layer: Physical redundancy (through the use of multiple disks, network cards, redundant servers, geographical sites, and datacenters); data redundancy (constant replication of data across datacenters); and functional redundancy (the ability for customers to work offline when network connectivity is interrupted or inconsistent).
  • Resiliency: We achieve service resiliency using active load balancing and dynamic prioritization of tasks based on current loads. Additionally, we are constantly performing recovery testing across failure domains, and exercising both automated failover and manual switchover to healthy resources.
  • Distributed functionality of component services: Component services of Office 365 are distributed across datacenters and regions to help limit the scope and impact of a failure in one area and to simplify all aspects of maintenance and deployment, diagnostics, repair and recovery.
  • Continuous monitoring: Our services are being actively monitored 24×7, with extensive recovery and diagnostic tools to drive automated and manual recovery of the service.
  • Simplification: Managing a global, online service is complex. To drive predictability, we use standardized components and processes wherever possible. Loose coupling among the software components results in less complex deployment and maintenance. Lastly, a change management process that goes through progressive stages from scope to validation before being deployed worldwide helps ensure predictable behaviors.
  • Human backup: Automation and technology are critical to success, but ultimately, it’s people who make the most critical decisions during a failure, outage or disaster scenario. The online services are staffed with 24/7 on-call support to provide rapid response and information collection towards problem resolution.

These elements exist for all the online services – Azure, Office 365, Dynamics, and so on.

But how are they leveraged during business continuity testing?

Each service team tests their contingency plans at least annually to determine the plan’s effectiveness and the service team’s readiness to execute the plan. The frequency and depth of testing is linked to a confidence level which is different for each of the online services. Confidence levels indicate the confidence and predictability of a service’s ability to recover.

For details on the confidence levels and testing frequencies for Exchange Online, SharePoint Online, OneDrive for Business and other services, please refer to the most recent ECBM Plan Validation Report available on the Office 365 Service Trust Portal.

BC/DR Plan Validation Report – FY19 Q1

A new reporting process has been developed in response to Microsoft Online Services customer expectations regarding our business continuity plan validation activities. The reporting process is designed to provide additional transparency into Microsoft’s Enterprise Business Continuity Management (EBCM) program operations.

The report will be published quarterly for the immediately preceding quarter and will be made available on the Service Trust Portal (STP). Each report will provide details from recent validations and control testing against selected online services.

For example, the FY19 Q1 report, which is posted on the Service Trust Portal (ECBM Testing Validation Report: FY19 Q1), includes information related to 9 selected online services across Office 365, Azure and Dynamics, with the testing dates and testing outcomes for each of the selected services.

The current report only covers a subset of Microsoft cloud services, and we are committed to continuously improving this reporting process.

If you have any questions or feedback related to the content of the reporting, you can send an email to the Office 365 CXP team at cxprad@microsoft.com.

Additional Business Continuity resources are available on the Trust Center, Service Trust Portal, Compliance Manager and TechNet:

  1. Azure SOC II audit report: The Azure SOC II report discusses business continuity (BC) starting on page 59 of the report, and the auditor confirms no exceptions noted for BC control testing on page 95.
  2. Azure SOC Bridge Letter Oct-Dec 2018: The Azure SOC Bridge letter confirms that there have been no material changes to the system of internal control that would impact the conclusions reached in the SOC 1 type 2 and SOC 2 type 2 audit assessment reports.
  3. Global Data Centers provides insights into Microsoft’s framework for datacenter Threat, Vulnerability and Risk Assessments (TVRA).
  4. Office 365 Core – SSAE 18 SOC 2 Report 9-30-2018: Similar to the Azure report, the Office 365 SOC II audit report (dated 10/1/2017 through 9/30/2018) discusses Microsoft’s position on business continuity (BC) in Section V, page 71, and the auditor confirms no exceptions noted for the CA-50 control test on page 66.
  5. Office 365 SOC Bridge Letter Q4 2018: SOC Bridge letter confirming no material changes to the system of internal control provided by Office 365 that would impact the conclusions reached in the SOC 1 type 2 and SOC 2 type 2 audit assessment reports.
  6. Compliance Manager’s Office 365 NIST 800-53 control mapping provides positive (PASS) results for all 51 Business Continuity Disaster Recovery (BCDR)-related controls within the Microsoft Managed Controls section, under Contingency Planning. For example, the Exchange Online Recovery Time Objective and Recovery Point Objective (EXO RTO/RPO) metrics are tested by the third-party auditor per NIST 800-53 control ID CP2(3). Other workloads, such as SharePoint Online, were also audited and discussed in the same control section.
  7. ISO-22301: This business continuity certification has been awarded to Microsoft Azure, Microsoft Azure Government, Microsoft Cloud App Security, Microsoft Intune, and Microsoft Power BI. This is a special one. Microsoft is the first (and currently the ONLY) hyperscale cloud service provider to receive the ISO 22301 certification, which is specifically targeted at business continuity management. That’s right. Google doesn’t have it. Amazon Web Services doesn’t have it. Just Microsoft.
  8. The Office 365 Service Health TechNet article provides useful information and insights related to Microsoft’s notification policy and post-incident review processes.
  9. The Exchange Online (EXO) High Availability TechNet article outlines how continuous, multiple EXO replication across geographically dispersed datacenters ensures data restoration capability in the wake of a messaging infrastructure failure.
  10. Microsoft’s Office 365 Data Resiliency Overview outlines ways Microsoft has built redundancy directly into our cloud services, moving away from complex physical infrastructure toward intelligent software to build data resiliency.
  11. Microsoft’s current SLA commitments for online services.
  12. Current worldwide uptimes are reported on the Office 365 Trust Center Operations Transparency page.
  13. Azure SLAs and uptime reports are found on Azure Support.

As you can see, there are a lot of places where you can find information related to business continuity, service resilience and related topics for Office 365.

This type of information is very useful for partners and customers who need to understand how Microsoft “keeps the lights on” with its Office 365 service and ensures that customers are able to meet regulatory standards, even if their data is in the cloud.

 

Every Question Tells a Story – Mitigating Ransomware Using the Rapid Cyberattack Assessment Tool: Part 3

In the previous two posts in this series, I explained how to prepare your environment to run the Rapid Cyberattack Assessment tool, and I told you the stories behind the questions in the tool.

https://blogs.technet.microsoft.com/cloudyhappypeople/2018/09/10/every-question-tells-a-story-mitigating-ransomware-using-the-rapid-cyberattack-assessment-tool-part-1/

https://blogs.technet.microsoft.com/cloudyhappypeople/2018/09/10/every-question-tells-a-story-mitigating-ransomware-using-the-rapid-cyberattack-assessment-tool-part-2/

Let’s finish up with the final steps in running the Rapid Cyberattack Assessment tool and a review of the output.

Specifying Your Environment

We’ve now finished all the survey questions in the assessment. Next, we need to tell the tool which machines to target for the technical part of the assessment.

There are three ways to accomplish this:

Server Name: You can manually enter the names of all the machines you want to assess in the box, separated by commas, as shown below. This is only practical if you are assessing fewer than 10 machines. If you’re assessing more than that, I’d recommend you use one of the other methods, or you’ll get really tired of typing.

File Import: Let’s say you want to assess 10 machines from 10 different departments. You could easily do this by putting all the machine names into a standard text file, adding one machine per line.

In the screenshot below, I have a set of machines in the file named INSCOPE.TXT, located in a folder named C:\RCA. This is a good way to run the assessment if the machines are spread across several OUs in Active Directory, which would make the LDAP path method less viable as a way of targeting machines. But again, it could be a lot of typing (unless you do an export from Active Directory – then it’s super easy).
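If the Active Directory module for PowerShell is available, that export can be a one-liner along these lines (a sketch – the OU path here is made up for the example):

  # Export computer names from an OU, one per line, into the in-scope file
  Get-ADComputer -Filter * -SearchBase "OU=Workstations,DC=contoso,DC=com" |
      Select-Object -ExpandProperty Name |
      Set-Content C:\RCA\INSCOPE.TXT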

LDAP Path: If you have a specific OU in Active Directory that you want to target, or if you have fewer than 500 machines in your entire Active Directory and just want to target all of them, the easiest way to do that is the LDAP path method. Simply type the LDAP path to the target OU, or to the root of your Active Directory, as shown in the screenshot below:
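For example, paths like these (using a made-up contoso.com domain) would target a single OU or the whole domain, respectively:

  LDAP://OU=Workstations,DC=contoso,DC=com
  LDAP://DC=contoso,DC=com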

 

NOTE: You only define your “in-scope” machines using ONE of the options noted above.

Click Next and run the assessment.

 

As you see, the assessment goes out and collects data about the machines in the environment, and then it generates a set of reports for you to review.

Click on View Reports to see the results of the assessment.

Notice that there are four files created. In my screenshot, you’ll also notice that the files don’t have the “official” Office icons – they just look like text files. This is because I don’t have Office installed on the Azure VM that is running the assessment. I can just copy the files to a machine with Office installed and open them from there. But as you see, there are two Excel spreadsheets, a Word document and a PowerPoint deck. These are all created and populated automatically by the tool.

Let’s take a look at the tool’s findings.

Rapid Cyberattack Assessment Affected Nodes spreadsheet

First, let’s open the RapidCyberattackAssessmentAffectedNodes.xlsx spreadsheet.

In this spreadsheet you have several tabs along the bottom. The first tab is named “Host”, and it shows the names of the machines it was able to contact during the assessment, their operating system build version, install date and last boot-up time. All pretty standard stuff.

 

The second tab is for “Installed Products”, and this is a comprehensive listing of all the installed software found on the machines in the assessment. This is one way of verifying the question in the survey about whether you are keeping all your apps and middleware up to date. As you can see in my screenshot, there’s some software running on my lab machines that is several versions old, and the spreadsheet tells me which machine that software is running on. This is all stuff that could easily be collected by a network management tool like System Center Config Manager, but not every company has that kind of tool, so we provide this summary.

 

The third tab is the “Legacy Computer Summary”, which tells you how many of the machines in the assessment are running operating systems that are no longer supported by Microsoft. In my case, I had none. The Active Count and the Stale Count columns simply tell you whether the machine is being logged on to regularly or if perhaps it is simply a stale object in Active Directory and can just be deleted.

The “Legacy Computer Details” tab would give you more information about those legacy computers and could potentially help you determine what they are being used for.

The “Domain Computer Summary” tab is a summary of how many machines on your network are running current operating system versions.

Rapid Cyberattack Assessment Key Recommendations document

Now let’s go to the Rapid Cyberattack Assessment Key Recommendations Word document.

 

As you can see, this is a nice, professional looking document with an extensive amount of detail that will help you prioritize your next steps. One of the first things we show you is your overall risk scorecard, with your risk broken down into four major categories. In my case, I’ve got some serious issues to work on.

But then we start helping you figure out how to approach the problems. We show you which of your issues are most urgent and require your attention within the next 30 days. Then we show you the mid-term projects (30+ days), and finally Additional Mitigation Projects that may take a more extended period of time, or that don’t have a set completion date (such as ensuring that the security of your partners and vendors meets your security requirements). By giving you this breakdown, a list of tasks that could seem overwhelming (such as what you see in my environment below) becomes somewhat more manageable.

We then get more granular and give you a listing of the individual issues in the Individual Issue Summary.

You’ll notice that each finding is a hyperlink to another location in the document, which provides you with a status on the issue, a description of the issue, its potential impact and (for some of the issues) which specific machines are impacted by the issue.

This is essentially the comprehensive list of all the things that should be addressed on your network.

So how do you track the progress on this?

Excel Resubmission Report spreadsheet

That’s the job of the Excel Resubmission Report.xlsx file. This file is what you would use to track your progress on resolving the issues that have been identified. In this spreadsheet you have tabs for “Active Issues” (things that require attention), “Resolved Issues” (things you’ve already remediated) and “Not Applicable Issues” (things that don’t apply to your environment).

 

This spreadsheet is a good way for a project manager to see at a high level what progress has been made on certain issues and where more manpower or budget may need to be allocated.

Rapid CyberAttack Assessment Management Presentation PowerPoint deck

This is the deck – only a few short slides – that provides a high-level executive summary of all the things you identified and how you intend to approach their resolution. This is a very simple deck to prepare and can be used as a project status update deck as well.

 

Every Question Tells a Story

So that’s the Rapid Cyberattack Assessment tool in its entirety. It isn’t necessarily the right tool for a huge Fortune 100 company to use to perform a security audit; there are much more comprehensive tools available (and they usually are quite expensive, which this tool is NOT).

But for the small-medium sized businesses who simply want to understand their exposure to ransomware and take some practical steps to mitigate that exposure, this tool is a great starting point.

Every Question Tells a Story – Mitigating Ransomware Using the Rapid Cyberattack Assessment Tool: Part 1

They say that a picture is worth 1,000 words.

But in some cases, the questions that you ask can also help tell a very interesting story.

Let me explain.

All of us are familiar with the devastating effects of ransomware that we saw last year in the WannaCry, Petya, NotPetya, Locky and SamSam ransomware attacks. We read the stories of the massive financial impact these attacks had on their victims, and we can only imagine the stress that the individuals in the IT departments of the impacted organizations went through trying to recover.

You may know that Microsoft has created a tool called the Rapid Cyberattack Assessment. The intent of the tool is to help an organization understand the potential vulnerabilities and exposures they have to ransomware attacks so that they can take steps to keep from being the next victim.

But like I said – every question tells a story – and in this tool there are many questions that an IT admin needs to ask himself or herself, and there’s a story behind each of these questions that helps make the tool’s value evident.

Let’s take a look at the tool and as we go through the tool I’ll try to give you the story behind each question.

Preparing the Environment

First, let’s download the tool itself.

It’s a free download from Microsoft, located here:

https://www.microsoft.com/en-us/download/details.aspx?id=56034

It’s important to download both the executable (RCA.exe) and the requirements document. The requirements document matters because if you don’t set up the tool and the target machines correctly, you likely won’t get some very valuable information.

Conditions

First of all, you need to be aware that the Rapid Cyberattack Assessment tool runs in an Active Directory environment, and against Windows machines only. Any machines that you target with the tool must be part of an Active Directory domain. Additionally, the tool is limited in scope to 500 machines.

What should you do if your environment is larger than that?

There are really two simple options:

  1. Assess your entire environment in 500-machine chunks. Run the tool against a specific OU or group of OUs that total no more than 500 machines. You can also just create lists (maybe exported from a spreadsheet) and use those as input for the tool. This method will definitely take a little bit of time, and it won’t give you a single, comprehensive report view, either.
  2. Take sample machines from a number of different departments and run the tool against them. With this method, you would take (as an example) 50 machines from HR, 50 machines from Finance, 50 machines from Sales, and so on, and run the tool against them. It doesn’t capture data on every single machine in the environment, but the tool is designed to give you an idea of where your exposures lie, and that would most likely be revealed in even a random sampling of machines. You can reasonably assume that any vulnerabilities identified in that subset of machines likely exist elsewhere in the environment as well.

Personally, if I were running the tool in my own environment and we had more than 500 machines, I would choose the second option. It gives me a rough idea of the issues I have to resolve and helps me prioritize them. If my environment has more than 500 machines, I’m probably managing them with some sort of automation tool anyway (like System Center Configuration Manager), so I don’t have to know exactly how each machine is configured. I’ll just define what the standard should be and push out that configuration.

Hardware and Software

Installing the Rapid Cyberattack Assessment tool itself isn’t hard at all. You simply run the RCA.exe executable. There aren’t any options or choices to make other than agreeing to the license terms. Likewise, the machine you run the tool from doesn’t have a lot of requirements:

  1. A server-class or high-end workstation running Windows 7/8/10 or Windows Server 2008 R2/2012/2012 R2/2016.
  2. Preferably 16 GB+ of RAM, a 2 GHz+ processor and at least 5 GB of disk space.
  3. Joined to the Active Directory domain you will be assessing.
  4. Microsoft .NET Framework 4.0 installed.
  5. Optionally, Word, PowerPoint and Excel installed to view the reports. But you can also just export the reports and view them on a machine that has Office installed already.

Account Rights

The service account you use to run the tool needs to be a domain user who has local administrator permissions to all the machines within the scope of the assessment. The account should also have read access to the Active Directory forest that the in-scope computers are joined to.

Network Access

The machines you are trying to assess obviously must be reachable by the assessing machine. Therefore, there must be unrestricted access from the tool’s machine to any of the in-scope domain-joined machines. By “unrestricted access” we mean you should make sure there are no firewall rules or router ACLs that would block access to any of the following protocols and services (a quick way to spot-check them is shown after the list):

  • Remote Registry
  • Windows Management Instrumentation (WMI)
  • Default admin shares (C$, D$, IPC$)
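From the tool’s machine, you can spot-check all three against a single in-scope computer from a command prompt (CONTOSO-PC01 is a hypothetical machine name):

  REM Default admin share, Remote Registry and WMI, respectively
  dir \\CONTOSO-PC01\C$
  sc \\CONTOSO-PC01 query RemoteRegistry
  wmic /node:"CONTOSO-PC01" os get Caption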

If you are using Windows Advanced Firewall on the in-scope machines, you may need to adjust the firewall to allow the assessment tool to run.

You can configure this using a Group Policy targeted at the in-scope machines. To do this, follow these steps.

  1. Use an existing Group Policy object or create a new one using the Group Policy Management Tool.
  2. Expand the Computer Configuration/Policies/Windows Settings/Security Settings/Windows Firewall with Advanced Security/Windows Firewall with Advanced Security/Inbound Rules
  3. Check the Predefined: radio button and select Windows Management Instrumentation from the drop-down list. Click Next
  4. Check the WMI rules for the Domain Profile. Click Next
  5. Check the Allow the Connection radio button and click Finish to exit and save the new rule.
  6. Make sure the Group Policy Object is applied to the relevant computers using the Group Policy Management Tool

You would then do the same thing for the File and Printer Sharing predefined rules.
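If you’d rather not stage a Group Policy change just for a quick test, the same predefined rule groups can be enabled locally from an elevated command prompt (the group names below are the built-in English ones – verify them on your OS version):

  netsh advfirewall firewall set rule group="windows management instrumentation (wmi)" new enable=yes
  netsh advfirewall firewall set rule group="file and printer sharing" new enable=yes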

For the Remote Registry Service, you want to set the service to Automatic startup for the duration of the assessment.

  1. Open the Group Policy editor and the GPO you want to edit.
  2. Expand Computer Configuration > Policies > Windows Settings > Security Settings > System Services
  3. Find the Remote Registry item and change the Service startup mode to Automatic
  4. Reboot the clients to apply the policy
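For a one-off test on a single machine, the equivalent can be done with the service controller from an elevated prompt (note the required space after start=):

  REM Set Remote Registry to start automatically, then start it
  sc config RemoteRegistry start= auto
  sc start RemoteRegistry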

That should be enough to get you started.

In the next post, I’ll walk you through the survey questions in the Rapid Cyberattack Assessment tool.

https://blogs.technet.microsoft.com/cloudyhappypeople/2018/09/10/every-question-tells-a-story-mitigating-ransomware-using-the-rapid-cyberattack-assessment-tool-part-2/

I think you’ll find the stories revealed by the questions to be quite interesting.

Secure Your Office 365 Tenant – By Attacking It (Part 2)

By David Branscome

 

In my previous post (https://blogs.technet.microsoft.com/cloudyhappypeople/2018/04/04/secure-your-office-365-tenant-by-attacking-it), I showed you how to use the Office 365 Attack Simulator to set up the Password Spray and Brute Force Password (Dictionary) Attacks.

What we often find, though, is that spear phishing campaigns are extremely successful in organizations and are often the very first point of entry for the bad guys.

Just for clarity, there are “phishing” campaigns and there are “spear phishing” campaigns.

A phishing campaign is typically an email sent out to a wide number of organizations, with no specific target in mind. Such emails are usually generic in nature, casting a wide net in hopes of getting one of the recipients to click on a URL or open an attachment in the email. Think of the email campaigns you’ve likely seen where a prince from a foreign country promises you $30 million if you’ll click on this link and give him your bank account information. The sender doesn’t particularly care WHO gets the email, as long as SOMEBODY clicks on the links.

On the other hand, a spear phishing campaign is much more targeted. In a spear phishing campaign, the attacker has a specific organization they are trying to compromise – perhaps even a specific individual. Maybe they want to compromise the CFO’s account so that they can fraudulently authorize money transfers from the organization by sending an email that appears to be coming from the CFO. Or maybe they want to compromise a highly-privileged IT admin’s email account so that the attacker can send an email asking users to browse to a fake password reset page and harvest user passwords. The intent with a spear phishing campaign is to make the email look very legitimate so that the recipients aren’t suspicious – or perhaps they even feel obligated to do as instructed.

What Do I Need?

As you can imagine, setting up a spear-phishing campaign takes a little more finesse than a brute force password attack.

First, decide WHO the sender of the spear phishing email will be. Maybe it’s HR requesting that you log in and update your benefits information. Or perhaps it’s the IT group asking everyone to confirm their credentials on a portal they recently set up.

Next, decide WHO you want to target with the campaign. It may be the entire organization, but keeping a low profile as an attacker also has its advantages.

You’ll probably want to use a realistic HTML email format so that it looks legitimate. The Attack Simulator actually provides two sample templates for you, as we’ll see below. Using the sample templates makes the campaign very easy to set up, but as you get more comfortable using the Attack Simulator, you will likely want to craft your own email to look more like it’s coming from your organization.

That should be enough to get us started.

Launching a Spear Phishing Attack

In the Attack Simulator console, click on “Launch Attack”.

 

At the Provide a name to the campaign page, provide your own name or click on “Use template”. If you click on “Use template”, you will see two template options to choose from. I’ve chosen “Prize Giveaway” below:

 

Next, select the users you want to “phish”. You can select individual users or groups.

On the next page, if you’ve selected a template, all the details will be filled in for you. One important value to note here is the Phishing Login Server URL. Select one of the phishing login servers from the drop-down. This is how the attack simulator tracks who has clicked on the URL in the email and provides reporting.

Note that the URLs for the phishing login servers are NOT actually malicious sites. They are sites set up specifically for the purposes of this tool.

In the Email body form, you can customize the default email. Make sure that you include the ${username} variable so that the email looks like it was sent directly to the end user.

Click Confirm, and the Attack Simulator will send the email out to the end users you specified.

Next, I opened the Administrator email account that I targeted and saw this:

Notice that it customized the email to the MOD Administrator account in the body of the email.

If I click on the URL (which points to the http://portal.prizesforall.com URL we highlighted earlier), I get sent to a website that looks like this.

 

Finally, if I click on the reporting area of the Attack Simulator, I can see who has clicked on the link and when.

Okay. But seriously…would you really have clicked on that URL?

Probably not.

So how do you make it a little more sophisticated?

Let’s create a more realistic attack.

In this attack we will use the Payroll Update template, which is very similar to what you might actually see in many corporate environments. You can also create your own HTML email using your organization’s branding and formatting.

I’ll again target the MOD Administrator because he seems like a good target, since he’s the O365 global admin (and seems to be somewhat gullible).

In this situation, though, instead of sending from what appears to be an external email address (prizes@prizesforall.com, used in the previous attack), I’m going to pose as someone the user might actually know. It could be the head of HR or Finance or the CEO. I’ll use the actual email address of that person so that it resolves correctly.

Notice that this template uses a different phishing login server URL from the drop-down. You’ll see why in a second.

In the Email body page, we’ve got a much more realistic looking email.

It should be noted, though, that if you make the email look ABSOLUTELY PERFECT and people click on the URL, what have they learned? It’s best to provide a clue in the email that a careful user would notice and recognize as a problem. Maybe send the official HR email from someone who isn’t actually in HR, or leave off a footer that identifies it as an official HR email. Whatever it is, there should be something you can use to train users to look out for.

So if you read the email template below carefully, you’ll see some grammatical errors and misspellings that should be a “red flag” to a careful user.

Again, you Confirm the settings for the attack and the attack launches.

Going to the MOD Administrator’s mailbox….that’s much more realistic, wouldn’t you say?

When I click on the “Update Your Account Details” link, I get sent to this page, where I’m asked to provide a username and password, which of course, I dutifully provide:

Notice, however, that the URL at the top of the page is the portal.payrolltooling.com website – even though the page itself looks like a Microsoft login page. Many attacks will mimic a “trusted site” to harvest credentials in this manner. When you’re testing, you can use any email address (legitimate or not) and any password – it isn’t actually authenticating anything.

Once I enter some credentials, I am directed to the page below, which lets me know I’ve been “spear phished” and provides some hints about identifying these kinds of attacks in the future:

And finally, in the reporting, I see that my administrator was successfully spear phished.

 

The Value of Attack Simulations

This is all interesting (and a little bit fun), but what does it really teach us? The objective is that once we know what sorts of attacks our users are vulnerable to (password attacks and phishing are the two highlighted by this tool), we can provide training to help enhance our security posture. Many of the ransomware attacks that are blanketing the news lately started as phishing campaigns.

If we can take steps to ensure that our users are better equipped to identify suspicious email, and help them select passwords that aren’t easily compromised, we help improve the organization’s security posture.

Secure Your Office 365 Tenant – By Attacking It (Part 1)

By David Branscome

I’ve been waiting several months for this day to arrive. The Office 365 Attack Simulator is LIVE!

If you log into your Office 365 E5 tenant with the Threat Intelligence licensing, it shows up here in the Security & Compliance portal.

When you click on it, the first thing it will tell you is that there are some things you need to set up before you can run an actual attack. There’s a link that says, “Set up now” (in the yellow box shown below). After you click that link, it says the setup is complete, but you’ll have to wait a little while before running an attack. (I only had to wait about 10 minutes when I set it up.)

 

It also reminds you that you need to have MFA (multi-factor authentication) set up on your tenant in order to run an attack. This makes a lot of sense, since you want to ensure that anyone who runs the attack is a “good guy” on your network.

To set up MFA, follow the steps here:

Go to the Office 365 Admin Center

Go to Users > Active users.

Choose More > Setup Azure multi-factor auth.

 

Find the people who you want to enable for MFA. In this case, I’m only enabling the admin account on my demo tenant.

Select the check box next to the people you want to enable for MFA.

On the right, under quick steps, you’ll see Enable and Manage user settings.

Choose Enable.

In the dialog box that opens, choose enable multi-factor auth.

The Attacks

Spear Phishing

With a spear phishing attack, I’m sending an email to a group of “high-value” users – maybe my IT admins, the CEO/CFO, the accounting office, or some other user group whose credentials I want to capture. The email I send contains a URL that will allow me to capture user credentials or some other sensitive data as part of the attack. When I set up this attack, it needs to look like it’s coming from a trusted entity in the organization. Maybe I’ll set it up to make it appear as though it’s coming from the IT Security group asking them to verify their credentials.

Brute Force Password (a.k.a., Dictionary Attack)

In this attack, I’m running an automated attack that just runs through a list of dictionary-type words that could be used as a password. It is going to use lots of well-known variations, such as using “$” for “s” and the number 0 for the letter O. If you thought Pa$$w0rd123 was going to cut it as a secure password on your Office 365 account, this attack will show you the error of your ways.

This type of attack is pretty lengthy in nature because there are thousands of potential guesses being made against each user account. The attack can be set up to vary in frequency (time between password guesses) and number of attempts.

It’s important to note that if a password is actually found to be successful, that password is NOT exposed to anyone – even the admin running the attack. The reporting simply indicates that the attack was successful against Bob@contoso.com, for example.

Password Spray Attack

A password spray attack is a little different from the brute force password attack, in that it allows the admin/attacker to define a password to use in the attack. These would typically be passwords that are meaningful in some way – not simply an attempt using hundreds or thousands of guesses. The password you use could be something like the name of a football team mascot and the year they won a championship, or the name of a project that people in one department are working on. Whatever criteria you select, you define what password or passwords should be attempted and the frequency of the attempts.

Ready? Let’s go hunting…

Launching a Password Spray Attack

First, I’ll try the password spray attack. I’ve set up several accounts in my test tenant with passwords that are similar to the one I’ll attempt to exploit – which is Eagles2018!. Notice that, by most criteria, that’s a complex password – upper and lower case, alphanumeric, and it includes a special character – but it’s also a fairly easily guessed password, since the Philadelphia Eagles won the Super Bowl in 2018 (though it pains me to say that).

I’ve set up a couple users with that password to ensure I get some results.

I go to my Attack simulator and click on Launch Attack.

The first screen is where I name the attack.

Next, I select the users I want to target. Notice that I can select groups of users as well.

Now I manually enter the passwords I want to use in the attack.

Confirm the settings, click Finish and the attack will begin immediately.

If I go back to my Attack Simulator console, I can see the attack running.

After the attack completes, I see the users who have been compromised using the password.

(Yes, I’ve reset their passwords now, so don’t try and get clever.) 😊

Now I politely encourage ChristieC and IrvinS to change their passwords to help ensure their account security.

Launching a Brute Force Password (Dictionary Attack)

Again, I’ve set up a couple accounts with some pretty common password combinations (P@ssword123, P@ssw0rd!, etc.).

I walk through the configuration of the attack, which is very similar to the Password Spray attack setup.

 

I set up my target users as before, and then I choose the attack settings.

In this case, I uploaded a text file containing hundreds of dictionary passwords, but you can create a sampling of several passwords by entering them manually one at a time in the field above the Upload button.

 

As the attack runs, you’ll see something like the screenshot below. Remember, if you have a large number of users and a very large wordlist for the dictionary attack, this attack will run for quite some time as the simulator cycles through all the possible variations for each user.

 

And again, when the simulation is complete, you’ll want to caution DiegoS on his lack of good password hygiene.

In my second blog post, I’ll show you how to do a Spear Phishing Attack. These are the REALLY sneaky ones….

Stay tuned!

 

The End of Support for Older TLS Versions in Office 365

by David Branscome, with a callout to Joe Stocker at Patriot Consulting for the heads-up!

The SSL/POODLE Attack Explained

UPDATE: As per the support article listed here (https://support.microsoft.com/en-us/help/4057306/preparing-for-tls-1-2-in-office-365), we will be extending support for TLS 1.0/1.1 through October 31, 2018, in order to help ensure our customers are adequately prepared for the changes.

 

As most of you know, there was a significant vulnerability identified in the SSL 3.0 protocol back in 2014, named POODLE (Padded Oracle On Downgraded Legacy Encryption).

The problem was this: SSL 3.0 is basically an obsolete and insecure protocol. As a result, it has been, for the most part, replaced by its successors, TLS 1.0, 1.1 and 1.2. The way a client-server encryption negotiation sequence typically works is that the client contacts a server and, through a handshake process, they agree on the highest level of security over which they both can communicate. So, for example, a client makes a request to a server and says, “I’d like to use TLS 1.2 for our communication, but I can also use TLS 1.0, if you need to.” The server responds with, “I don’t speak TLS 1.2, but I do speak TLS 1.0, so let’s agree to use that.” They then use that downgraded protocol as their preferred encryption method. The downgrade sequence could ALSO downgrade the encryption to use SSL 3.0, if necessary.

However, even in situations where client and server both support the use of the newer security protocols, an attacker with access to some portion of the client-side communication could disrupt the network and force a downgrade to the SSL 3.0 encryption. This is typically referred to as a man-in-the-middle attack, because the attacker sits on the network between two parties and captures their communication stream. This is an altogether separate type of attack, unrelated to the POODLE vulnerability itself, and must be defended against using other methods.

Anyway, now that the attacker has successfully forced the use of SSL 3.0 encryption and has access to the communication stream, they can attempt the POODLE attack and get access to decrypted information between the client and the server.

When this vulnerability came out, there was a significant amount of work done worldwide to mitigate the impact and scope of the issue. The vulnerability in SSL 3.0 itself couldn’t be remediated because the issue was fundamental to the protocol itself. Because of this, the best solution for organizations was simply to disable support for SSL 3.0 in their applications and systems.

So That Was 3 Years Ago….

As described in the links at the bottom of this article, Microsoft still supports the use of TLS 1.0 and 1.1 for clients connecting to the Office 365 service. However, due to the potential for future downgrade attacks similar to the POODLE attack, Microsoft is recommending that dependencies on all security protocols older than TLS 1.2 be removed, wherever possible. This would include TLS 1.1/1.0 and SSL v3 and V2.

The problem here is that many operating systems and applications have a hardcoded protocol version to ensure interoperability or supportability. In Windows 8 and Windows Server 2012 and higher, the default protocol that is used is TLS 1.2 – which is good.

However, in Windows 7 and Windows 2008 R2, TLS 1.0 was the default protocol. In fact, TLS 1.1 and 1.2 were actually configured as “disabled”. See the table below:

As outlined in the article “Preparing for the mandatory use of TLS 1.2 in Office 365”, this is going to present a problem if your organization is still using Windows 7/Vista clients. Why?

Because on October 31, 2018, Microsoft Office 365 will be disabling support for TLS 1.0 and 1.1. This means that, starting on October 31, 2018, all client-server and browser-server combinations must use TLS 1.2 or later protocol versions to be able to connect without issues to Office 365 services. This may require certain client-server and browser-server combinations to be updated.
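On those older Windows versions, TLS 1.2 can generally be turned on through the documented SChannel registry values. A minimal sketch (test on a single machine first; some components, such as applications built on WinHTTP, may need additional updates as described in the articles below):

  REM Enable TLS 1.2 for client connections on Windows 7 / Server 2008 R2
  REM (run from an elevated command prompt)
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v Enabled /t REG_DWORD /d 1 /f
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f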

Our internal telemetry of client connections indicates that this shouldn’t be a problem for most organizations, since the majority are not using TLS 1.0 or 1.1 anyway. However, for the network you manage, it’s probably a good idea not to simply assume everything will be great. 😊

As an example, if you’re using any on-premises infrastructure for hybrid scenarios or Active Directory Federation Services, make sure that these infrastructures can support both inbound and outbound connections that use TLS 1.2.

How Do I Know if I Need to Take Action?

New IIS functionality on Windows Server 2012 R2 and Windows Server 2016 makes it easier to find clients that connect to the service by using weak security protocols.

https://cloudblogs.microsoft.com/microsoftsecure/2017/09/07/new-iis-functionality-to-help-identify-weak-tls-usage/

There are also some simple checks available from Qualys Labs to check browser compatibility – https://www.ssllabs.com/ssltest/viewMyClient.html – as well as the certificate and encryption configuration on your servers with SSL certificates – https://www.ssllabs.com/ssltest/.
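If you prefer a command-line spot check, openssl can force a specific protocol version and show you whether the handshake succeeds. A quick sketch (outlook.office365.com is just an example endpoint; the flags are standard openssl s_client options):

  # Should succeed today and after the change
  openssl s_client -connect outlook.office365.com:443 -tls1_2
  # Tests whether the endpoint still accepts TLS 1.0
  openssl s_client -connect outlook.office365.com:443 -tls1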

Hopefully these checks will help you to ensure that your organization is ready when the change is made to the Office 365 services on October 31, 2018.

Additional Resources

Preparing for the mandatory use of TLS 1.2 in Office 365

https://support.microsoft.com/en-us/help/4057306/preparing-for-tls-1-2-in-office-365

Solving the TLS 1.0 Problem

https://www.microsoft.com/en-us/download/confirmation.aspx?id=55266 

Disabling TLS 1.0/1.1 in Skype for Business Server 2015 – Part 1 and 2

https://blogs.technet.microsoft.com/nexthop/2018/04/18/disabling-tls-1-01-1-in-skype-for-business-server-2015-part-1/

https://blogs.technet.microsoft.com/nexthop/2018/04/18/disabling-tls-1-01-1-in-skype-for-business-server-2015-part-2/

Implementing TLS 1.2 Enforcement with SCOM

https://blogs.technet.microsoft.com/kevinholman/2018/05/06/implementing-tls-1-2-enforcement-with-scom/

Exchange Server TLS Guidance

https://blogs.technet.microsoft.com/exchange/2018/01/26/exchange-server-tls-guidance-part-1-getting-ready-for-tls-1-2/

https://blogs.technet.microsoft.com/exchange/2018/04/02/exchange-server-tls-guidance-part-2-enabling-tls-1-2-and-identifying-clients-not-using-it/

https://blogs.technet.microsoft.com/exchange/2018/05/23/exchange-server-tls-guidance-part-3-turning-off-tls-1-01-1/

Intune TLS Guidance

https://blogs.technet.microsoft.com/intunesupport/2018/06/05/intune-moving-to-tls-1-2-for-encryption/

Preparing for TLS 1.0/1.1 Deprecation – O365 Skype for Business

https://techcommunity.microsoft.com/t5/Skype-for-Business-Blog/Preparing-for-TLS-1-0-1-1-Deprecation-O365-Skype-for-Business/bc-p/223608