Sunday, July 28, 2024

How to fix MITRE D3FEND?

Praise

The MITRE ATT&CK framework has been a real game changer for detection engineering and for my friends in the Security Operations Center!

The first big benefit is that it allows defenders to rapidly parse and understand threat intelligence, because it arrives in a known, pre-understood format. Threat intelligence that doesn't leverage the MITRE ATT&CK framework is just plain annoying to read.

New TTPs can be added without messing up the "attack chain" format. These TTP "abuse cases" can be mapped to "use cases" or playbooks in the SOC.

ATT&CK is very useful for SOC teams responding to security incidents resulting from control gaps or control failures, especially when leveraging Endpoint Detection & Response (EDR/XDR) and Security Information & Event Management (SIEM) technologies.

The desire for improvement

Now, about D3FEND: wouldn't it be great to have a similar extensible frame of reference for security architects who are building in defense in depth and attack chain disruption, one we could use internationally, between clients and so on?

The challenges

1. We operate in a Microsoft monoculture. Which institutions and enterprises do not utilise Entra ID, Microsoft 365 and Windows 11 for their end user productivity environment? How do we avoid this becoming a Microsoft best-practices summary?
2. A lot of security for this monoculture is delivered via bolt-on third party security products, or now via optional Microsoft security products. How do we stay vendor neutral? I have a personal preference for "OS native endpoint security".
3. This environment is not secure by default, due to Microsoft's desire to support backwards compatibility "out of the box", resulting in wacky scenarios where we deploy "deception technology" to protect legacy identity technology.
4. COTS vs in-house developed applications: maybe we don't dive into secure application development in the SDLC, DevSecOps and the like in this model.
5. Vulnerability Management: let's exclude technologies that help us identify root causes.

Key controls

Thank goodness there are certain key controls that are partially effective against a number of "abuse cases".

These are:
1. Reducing root causes:
a) Patching Applications on Endpoint and Servers
b) Patching Operating Systems on Endpoint and Servers
c) Removing tech and security debt in IDAM - kill your Active Directory!
d) Patching Network Appliances
2. Protective Technologies
a) Secure Email Gateway
b) Secure Web Gateway
c) Application Control
d) Antimalware
e) Secrets management, with secrets scanning to enforce its use
f) Cloud Identity & Access Management - that establishes a perimeter in public cloud hosting - governed by Cloud Infrastructure Entitlement Management (CIEM) capability in a CNAPP
g) Software Composition Analysis - detect and prevent known bad supply chain attacks
3. Technologies that prevent attacker lateral movement and application access after they get a foothold on an endpoint, a DMZ-hosted system, or in the bowels of a database via SQL injection.
a) IDaaS with Multi-Factor Authentication and Conditional Access - a Zero Trust Policy Enforcement Point
b) Zero Trust Network Access (because VPN appliances suck and identity driven access to network attack surface is very powerful esp. if implemented per application leveraging Identity Governance & Administration technology)
c) Macrosegmentation and Next Generation Firewall
d) Microsegmentation
4. Detective Technologies
a) Endpoint Detection & Response and XDR
b) SIEM, SOAR and UEBA

Abuse cases

We want to illustrate the attack chain disruption possible with these key controls across a set of abuse cases or attack chains:
1. Phishing email with link or attachment
2. Exploit of known vulnerability in an internet facing network appliance - eek - what controls apart from patching help with this!
3. Exploit of known vulnerability in COTS software
4. Supply Chain attack with phone home to C&C infrastructure
5. Stolen/leaked creds against API

Proposed working model for improvement

The working model maps each abuse case across four columns: root cause remediation, protective technologies, lateral/deeper movement prevention, and detective technologies.

Abuse case: Phishing link in email to actions on objective
- Root cause remediation: N/A
- Protective technologies: Secure Email Gateway (known bad sender, known bad link, sandbox); Secure Web Gateway (known bad URL, sandbox)
- Lateral/deeper movement prevention: N/A
- Detective technologies: N/A

Abuse case: Malware attachment in email to actions on objective
- Root cause remediation: patch the endpoint operating system; patch end user apps on the operating system
- Protective technologies: Secure Email Gateway (sandbox)
- Lateral/deeper movement prevention: N/A
- Detective technologies: N/A

Abuse case: Exploit on internet facing appliance to actions on objective
- Root cause remediation: patch the VPN appliance, or use auto-patch
- Protective technologies: ZTNA?
- Lateral/deeper movement prevention: ?
- Detective technologies: Logs?

Abuse case: Supply chain attack
- Root cause remediation: Y
- Protective technologies: Secure Web Gateway (least privilege internet access for workload identities); Software Composition Analysis (block known bad updates to the cached repo; antimalware scan of the repo)
- Lateral/deeper movement prevention: N/A
- Detective technologies: N/A

Thursday, September 7, 2023

Repost from 2009 - what do you need to know to work in infosec?

Here's a list of things that are really handy to know for the day-to-day business of information security. Note: if you know how to do these things, then learning to review them is simply applying "audit methodology". I hope this list will be useful to me as a refresher and to others wanting to further their skills:


1. TCP/IP basics like OSI model, routing, protocols, ports, NAT
2. Construct a Check Point firewall rule base
3. Construct a PIX firewall rule set
4. Configure a Cisco router to the CIS benchmark
5. Configure VLANs and port mirroring on a Cisco switch
6. Deploy Microsoft security templates to a group policy object
7. Configure a WSUS server and run MBSA to check it is working
8. Use Solaris Security Toolkit
9. Administer a Linux box, enable/disable services, use package managers etc.
10. Install Oracle and MySQL
11. Be able to construct an SQL query or two (see the parameterised example after this list)
12. Configure a web server or two (say Apache and IIS)
13. Configure an application server or three (say Tomcat, WebSphere Application Server, maybe BEA WebLogic)
14. Be able to use a web proxy (Burp, WebScarab) and a fuzzer
15. Know how the following security controls of authentication, session management, input validation and authorisation are implemented securely for a number of application development frameworks
16. Configure an IDS or three (Snort, IBM solution set)
17. Know the ten domains in ISO 27002 and their content
18. Be able to identify control gaps from ISO 27002 in your operations
19. Be able to build a security plan to address control gaps (planned end state, costs and benefits, dates, actions and responsibilities)
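
On items 11 and 15 together, here's a minimal sketch in Python with SQLite of constructing an SQL query with parameterised input handling. The table and data are purely illustrative.

```python
# Items 11 and 15: construct an SQL query with input validation via
# parameterisation. Table and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

user_supplied = "alice'; DROP TABLE users; --"  # hostile input

# Parameter binding keeps attacker input as data, never as SQL syntax.
rows = conn.execute(
    "SELECT id, name FROM users WHERE name = ?", (user_supplied,)
).fetchall()
print(rows)  # [] - the injection attempt matches no rows
```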

RePost from 2008 - First jobs

I was thinking about my first job in security, and was kind of thankful for the opportunity. I was a security guard, working nights at a police HQ and days at the criminal investigation branch.


Man, I had some interesting encounters with the general public, well the very sketchy portions of the general public.

It was kind of cool to roll in unmarked police cars on occasion and tote some sort of police ID. The responsibility sort of gave me some direction when I needed it.

Thanks for giving me the opportunity, you know who you are!

DevSecOps

Hey there, internet people! It's me, your friendly, non-standard, non-scary security architect.

I gave a talk at CISO Brisbane the other day which was a bit of a primer on DevSecOps.

So, from my notes, here is that talk as a blog post.

Let's stack some concepts and some history and introduce you to some cool people.

What's DevSecOps?

It's a portmanteau of Development, Security and Operations: three types of subject matter experts working together holistically, in a spirit of collaboration and incremental improvement, supported by the practice of platform engineering and a "paved road" that provides a way for developers to deploy code into production incrementally, at increased velocity and in a low-risk manner.

Why is it so interesting?

Everyone wants to go faster with higher quality and deliver features quicker for better services for their "customers". This is particularly interesting in the area of delivering enterprise software via the Software as a Service business model. If you can't deliver great software quickly you lose market share! It's also interesting in any organisation that provides services electronically to their stakeholders and business partners.

Agile - perhaps this is where it all kicked off?

Agile development (you know: post-it notes, kanban cards, user stories, backlogs and such) broke down the barriers between the business, developers and testers by using "plain English" user stories, delivered in two-week sprints, to ship the features that customers wanted. This was in contrast to the waterfall approach to software development, which took years to deliver software that, when delivered, wasn't what the customers wanted.

DevOps - This is when it accelerated?

Gene Kim coined the three ways of DevOps being:

  • The First Way: Flow/Systems Thinking
  • The Second Way: Amplify Feedback Loops
  • The Third Way: Culture of Continual Experimentation and Learning

DevOps broke down the barriers between developers and operations. The devs became part of squads and got put on the PagerDuty roster too! Instead of throwing software over the fence, the DevOps squad became accountable and responsible for the overall quality of the applications. Concepts such as "observability", and building it into the software so it could be more easily supported, were matured.

"security is a quality attribute" - Matthew Hackling 2012

Continuous Integration and Continuous Deployment

CI/CD leveraged automation to deliver incremental, low-risk changes to production, with version control, unit testing, smoke testing, rollback and so on.

Infrastructure as Code, Cloud and GitOps

With the wide-scale adoption by organisations of public cloud that was API-enabled for the deployment of infrastructure, Infrastructure as Code (IaC) became a widespread practice. This was further refined by the practice of GitOps, where a source code repo using the Git version management system holds the configuration state of infrastructure, which is then deployed via the CI/CD pipeline to configure the infrastructure to a known state. There is also the possibility of validating the installation against that state and performing "configuration drift management" on an ongoing basis.
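
As a minimal sketch of what "configuration drift management" means in practice: diff the desired state held in Git against what the live environment reports. The settings and the fetch_live_state() stand-in below are illustrative; a real implementation queries the cloud provider's API.

```python
# Sketch of drift detection: compare Git-held desired state to live state.
desired_state = {  # what the Git repo says should exist (illustrative)
    "web_sg_ingress_ports": [443],
    "bucket_public_access": False,
}

def fetch_live_state():
    # Hypothetical stand-in: in practice this queries the cloud provider API.
    return {
        "web_sg_ingress_ports": [443, 22],
        "bucket_public_access": False,
    }

def drift(desired, live):
    """Return settings where live state departs from the Git-held state."""
    return {k: (v, live.get(k)) for k, v in desired.items() if live.get(k) != v}

for key, (want, got) in drift(desired_state, fetch_live_state()).items():
    print(f"DRIFT {key}: desired={want} live={got}")
```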

"It's a small world unless you have to clean it" Matthew Hackling 2014

The Spotify Model

Working agile at scale came next, and the Spotify model was pioneered. It gave organisations a way to arrange DevOps squads using concepts such as tribes (e.g. mobile, desktop), chapters (e.g. front-end engineers, back-end engineers) and guilds (e.g. security, test automation).

Netflix and Platform Engineering

Netflix pioneered the platform engineering approach, through their business unit which is/was called platform engineering, and shared it via their excellent technical blogs.

The platform engineering approach was to provide squads with a "paved road" with guardrails: a supported way to deploy code into production, onto a container, to run a microservice, and so on.

You could go off the paved road on an adventure or an excursion but when you came back you had to share all the learnings to improve the paved road and meet all the requirements of the enterprise paved road. Maybe you discovered a better way through the mountains of software development at scale?

The DevSecOps manifesto.

I think Shannon Lietz of Sony/Intuit (what a legend) crafted this and hosted it at https://www.devsecops.org It's worth a read as it captures the vibe: security is going to lean in, get data-driven, leverage automation, stop being annoying and be part of the solution.

DevSecOps at scale

Practitioners such as Larry Maccherone pioneered DevSecOps at scale at entertainment and technology giant Comcast and captured a lot of the learnings in published documents.

DevSecOps in the military and Continuous Authority to Operate

The USAF led an initiative in the US Department of Defense to deliver a "software factory" enabling defense contractors to rapidly iterate software, leveraging container technology and security automation, without manual paper-based processes. I do believe there are now warplanes running Kubernetes, with software developed through this software factory.

Parallel Security Analytics Pipeline

Pioneered by Tanya Janca, the concept of a PSAP is an independent, slow pipeline that runs nightly with all the checks turned on, to inform the platform engineering team about what checks, blocks and solutions to develop and implement in the main "paved road".

DevSecOps in major products/services that enable you to make software

GitHub, GitLab and Azure DevOps all now have native, pre-engineered DevSecOps tooling provided as "premium licensing" as part of their services. These provide:

  • Software Composition Analysis (Dependabot coming soon) - big win: don't use vulnerable or backdoored dependencies to build your apps, and avoid supply chain attacks
  • Secrets scanning and blocking - prevent hard-coded secrets from getting into your repo; under the "assume breach" mantra, you can assume a threat actor has access to it (see the sketch after this list)
  • Static Application Security Testing (SAST) - run scans for common appsec problems with Semgrep or CodeQL rules
  • IaC scanning
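
As a minimal sketch of what the secrets scanning and blocking capability does under the hood: match committed content against known secret formats and fail the commit on a hit. The single AWS access key ID pattern below is illustrative; real scanners ship hundreds of rules.

```python
# Sketch of a pre-commit secrets check; one illustrative rule only.
import re
import sys

SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan(path):
    with open(path, encoding="utf-8", errors="ignore") as fh:
        text = fh.read()
    return [
        (path, name, match.group())
        for name, pattern in SECRET_PATTERNS.items()
        for match in pattern.finditer(text)
    ]

if __name__ == "__main__":
    hits = [finding for p in sys.argv[1:] for finding in scan(p)]
    for path, name, value in hits:
        print(f"BLOCK {path}: {name} {value[:8]}...")
    sys.exit(1 if hits else 0)  # non-zero exit fails the commit/build
```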

Large Language Models

Microsoft has pioneered the use of LLMs to provide security education, review of the code open in front of you, security coaching, and the writing of static analysis rules in CodeQL, with GitHub Copilot X.

So here's a few points

DevSecOps isn't:

  1. Running scripts under user credentials with elevated rights against an API, hopefully the right script
  2. Just buying security tooling and putting it in the pipeline
  3. Throwing PDFs or Jira cards at developers with unactionable information, expecting them to solve it
  4. Tools that run for hours that developers just bypass or turn off to get their work done
  5. RASIC charts and fights over gaps in the operational model
  6. Checklists and spreadsheets
  7. Security is someone else's problem
  8. Painting the Sydney Harbour Bridge with security vulnerabilities - when you think you've finished, you have to start again
  9. You are on your own and Goddess help you
What DevSecOps is:
  1. Taking a platform engineering approach to make a paved road, with the easy way being the secure way
  2. Coding out whole classes of vulnerabilities and preventing them coming back
  3. Living the three ways of DevOps with incremental security improvements on the day to day
  4. Developer empathy - don't give a developer a problem without a solution and put the solution in their context
  5. Radical accountability for quality - security is a responsibility of everyone, everyone in the squad cross trains so they can do the basics of development, testing, deployment and production support. 
  6. A "you build it you run it" attitude from the squad
  7. You have support from champions, the security guild and the platform engineering team.


Wednesday, August 25, 2021

What is a Parallel Security Analytics Pipeline

 

This concept was introduced to me by Tanya Janca, now of https://wehackpurple.com/, at RSA San Francisco in 2018.


What do we want to enable our developers with as appsec professionals?

D1. Coaching in the IDE, with click-to-fix guidance for common organisation-specific security controls - don't give a dev a problem without a solution

D2. High-signal, low-noise, helpful, LOW FRICTION cybersecurity controls in the Continuous Integration and Continuous Deployment pipeline that run fast on every build

What do we need as appsec professionals in order to make this happen?

A1 - analytics: how big is the code base? Have we got coverage of it? Have we got visibility of all the repos? Have we got visibility of the changes in the code base?

A2 - security analytics - what potential security vulnerabilities are in the code base?

A3 - process to triage and work through the backlog of potential vulnerabilities to find the flaws that really make a material impact to the security posture of the application


A solution to this is:

- linting in the IDE for the top 5 to top 10 organisation-specific flaws, with click-to-fix remediation

- quality-assured, organisation-specific, small, accurate checks in the CI/CD pipeline that run on every build and run fast, in under 2 minutes. If it's fast, high signal and low noise, the developers won't need to find a way around it

- a parallel security analytics pipeline that uses one or more SAST technologies to scan ALL the code in the repos on a less frequent basis (say nightly or weekly) and takes multiple minutes to hours to run (sketched at the end of this post)

- a backlog of triaged potential issues from the parallel security analytics pipeline output, worked through to confirm real flaws, to build the click-to-fix linting, and to burn down towards blocking checks in the main pipeline


I propose we add onto this concept all the slow-but-good, false-positive-heavy stuff like:

  • dynamic scanning of APIs for unauthenticated endpoints and IDORs
  • fuzzing
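
Pulling that together, here's a minimal sketch of the slow, scheduled side of a PSAP: run every rule against every repo, and emit a triage backlog instead of blocking builds. The run_sast() function is a stand-in for invoking a real scanner, and the repo and rule names are illustrative.

```python
# Sketch of the nightly PSAP run: scan everything, queue findings for triage.
import datetime
import json

REPOS = ["payments-api", "web-frontend", "batch-jobs"]  # illustrative

def run_sast(repo, all_rules=True):
    # Hypothetical stand-in for invoking a real SAST engine on the repo.
    return [{"repo": repo, "rule": "sql-injection", "file": "app.py", "line": 42}]

def nightly_run():
    backlog = []
    for repo in REPOS:
        backlog.extend(run_sast(repo, all_rules=True))
    stamp = datetime.date.today().isoformat()
    with open(f"psap-backlog-{stamp}.json", "w") as fh:
        json.dump(backlog, fh, indent=2)
    # Findings that triage confirms graduate into fast, blocking CI checks.
    return backlog

if __name__ == "__main__":
    print(f"{len(nightly_run())} potential issues queued for triage")
```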


Sunday, May 10, 2020

Repost - are all proxies actually security assets?

Bear with me, readers: in this article I posit a controversial viewpoint. That viewpoint is that all existing technologies acting as proxies to transit security zones must be considered security assets, and that the security team needs to get more involved in load balancer design and operations.
What are we doing with proxies these days?
Working in information security, on the technical side of things, you often get involved with the use of reverse and forward network proxies to transit security zones, often defined by firewall appliances.
A reverse proxy terminates the flow of data from untrusted networks in a semi-trusted network zone, your external De-Militarized Zone and fetches data from a more trusted network zone. Your reverse proxy is meant to have minimal functionality enabled and be security patched up to vendor recommendations. The whole idea behind the reverse proxy is that even if a vulnerability is exploited on it, at least it’s not in the core of your network with access to everything.
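As a minimal sketch of the mechanism (not a production design), here's a reverse proxy in a few lines of Python: the untrusted client terminates on the DMZ host, which fetches from a more trusted internal address. The internal address is an illustrative placeholder, and a real deployment would terminate TLS here and forward headers carefully.
```python
# Sketch: the DMZ host terminates the untrusted connection and fetches
# from a more trusted internal address on the client's behalf.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

INTERNAL_APP = "http://10.0.1.20:8080"  # more trusted zone (illustrative)

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # The untrusted client never talks to the internal host directly.
        with urlopen(INTERNAL_APP + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReverseProxy).serve_forever()
```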
A forward proxy fetches information from an untrusted network and returns it to a user or a system in a trusted network zone. Sometimes in large enterprises you will have user proxies and application proxies. The user proxies fetch content from the internet on behalf of the user and work on allowed categories that users are permitted to retrieve content from, whilst the application proxies allow internal systems to integrate only with whitelisted URLs they are allowed to post/get data to/from. The forward proxies simplify firewall rules and routing and help protect users and systems from retrieving malware or “phoning home” to malware command and control systems.
Proxies also should be used to terminate flows of encrypted data so that they can be inspected for malicious payload. With a DMZ located reverse proxy often an external CA issued digital certificate is installed, connecting to an internal CA issued digital certificate on the internal system. Private keys for these internal certificates can be loaded into intrusion detection and prevention systems etc. Sometimes these proxies are chained to Web Application Firewalls. With forward proxies, user computers are configured to trust internal CA issued certificates, which the forward proxy uses to perform TLS inspection.
Other “proxy technology” can include application specific forward DNS servers and NTP servers.
Why do we do this?
Historically we have only had north south network segregation. Hence web server vulnerabilities would result in access to underlying operating system and shared network segments. With a DMZ if a web server is compromised, maybe all web servers end up being owned through lateral “pivoting”, but at least not the database servers.
Often the only reason we are running Apache in a DMZ as a reverse proxy is "because security told us", or "that's how we've always done it", or because that is how the vendor of the application commonly deploys it.
Developers would love to just run Apache and Tomcat on the same host, or heck, just Tomcat. Often Apache acting as a reverse proxy adds no application functionality, except for hosting a "sorry, we're down" static web page during maintenance outages. In many cases the web server in the DMZ is just hosting a "plugin" that fetches content from another web server on the application server.
How are things changing?
Load balancers also known as Application Delivery Controllers are able to perform TLS re-termination and act as reverse proxies. The main point of ingress into network zones is now the load balancer.
Cloud based user forward proxies are becoming more popular as they provide the ability to protect mobile workers with the same surveillance and security policy as on premise users.
Newer versions of TLS are likely to implement Perfect Forward Secrecy (PFS), and the use of static encryption keys will be deprecated.
Slowly but surely, East-West network segregation is coming in via the new buzzword in town, Software Defined Networking (SDN). With SDN you can have a virtual network setup for each key application and allow interactions with other key applications on the principle of least privilege. East-West segregation essentially turns every application into a "silo" and allows communications between them on the principle of least privilege, restricting lateral movement by an attacker in datacentre networks. The days of physical network appliances physically wired to switches, routers and servers are numbered. The security market is moving more and more towards the building of reference models and QA review of deployment scripts, which drive the build and secure configuration of application, server and network infrastructure.
Often proxy logs are being mined to identify malware command and control communications, as once the proxy is the sole method of communication with the internet for most users and systems, all malware communications go through it.
So what as a security team should we be doing?
The enterprise security function must take on responsibility for security policy on web servers that perform reverse proxy capabilities.
Enterprise security must take on responsibility for governing security policy implementation on application delivery controllers leveraging all native capabilities available such as TLS configuration, Layer 3 filtering, Layer 7 filtering, content inspection and identity and access management integration.
The security function must retain the responsibility for governance of security policy on forward proxies and tweak the policy to restrict the download of “dangerous file types” only from trusted web sites (e.g. only allow .exe downloads from trusted vendor websites) and look seriously at implementing user behaviour driven anomaly detection as well as sandboxing to detect malicious PDFs etc.
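As a minimal sketch of that forward proxy policy tweak, the decision logic is just: if the file type is dangerous, only allow it from an explicit allowlist of trusted sites. The extensions and domains below are illustrative.
```python
# Sketch: dangerous file types allowed only from trusted download sites.
from urllib.parse import urlparse

DANGEROUS_EXTENSIONS = {".exe", ".msi", ".js", ".vbs"}
TRUSTED_DOWNLOAD_DOMAINS = {"download.microsoft.com", "downloads.example-vendor.com"}

def allow_download(url: str) -> bool:
    parsed = urlparse(url)
    extension = "." + parsed.path.rsplit(".", 1)[-1].lower() if "." in parsed.path else ""
    if extension in DANGEROUS_EXTENSIONS:
        return parsed.hostname in TRUSTED_DOWNLOAD_DOMAINS
    return True  # category/reputation policy would still apply upstream

print(allow_download("https://download.microsoft.com/setup.exe"))  # True
print(allow_download("https://sketchy.example.net/payload.exe"))   # False
```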
The security function must work with network architecture to see if the functions of tier 1 firewall, forward proxy and web server can be collapsed. Perhaps this can be accomplished with load balancers to simplify the network topology and allow us to deploy security policy in a single place? If a virtual load balancer can deliver secure layer 3 stateful firewalling capability, do we even need tier 1 firewalls in front of them?
The security function must plan to support newer versions of TLS, which may implement perfect forward secrecy, whilst maintaining the ability to inspect web content coming into our DMZ.
Next Steps
Here’s a few suggestions I encourage you to take:
  • Inventory the proxy technologies in your organisation and what SSL/TLS inspection is being performed.
  • Investigate the native and optional security capabilities of load balancers, whether they are hardware appliances, virtual appliances or Amazon Elastic Load Balancers.
  • Develop a strategy/roadmap for consolidation/simplification of reverse proxy capabilities and for addressing support of future versions of TLS with mandatory perfect forward secrecy.
  • Investigate the capabilities of your existing forward proxies and whether you are making the most of that investment.

Repost - Mossack Fonseca - Insider Threat - What would you do?

So with all the press related to the Panama Papers I began thinking again about insider threat. So here is a quick list of suggested actions specifically to tackle data leakage/whistleblowing/insider threat. This is a particularly difficult challenge in information security as you often need to provide access to all customer records to the lowest level of employees within the organisation to facilitate timely customer service processes.
  1. Engage an organisation to provide an independent whistleblower call center and encrypted contact form service, with investigation support, to give employees an alternative to going to the press in case of middle and even senior management misconduct. This is a fail-safe measure to prevent corporate data sets being exfiltrated to the press by well-meaning if misguided employees. It also improves the ability to prosecute malicious insiders who may claim whistleblower protections as legal cover for a failed data theft/sale.
  2. Identify the most sensitive information in the organisation and the systems in which it resides. Check that access to this information is authenticated and logged, i.e. access to the content, not just authentication success/failure.
  3. Investigate whether there is an easily identifiable identifier for each customer record, and investigate its construction. Even consider modifying its construction so it is based on an algorithm that can easily be checked in a Data Leakage Prevention system signature to minimise false positives (see the check-digit sketch after this list).
  4. Block unapproved file sharing, webmail and uncategorised websites in the corporate web proxy policy.
  5. Provide an approved file transfer capability for ad-hoc file sharing with business partners.
  6. Block USB storage device usage. Perhaps only allow the use of corporate-issued encrypted USB drives for the required edge use cases, with enforced centralised logging of file activity.
  7. Implement TLS inspection of web traffic and Data Leakage Prevention (DLP) on endpoint, web and email traffic, including coverage of the approved file transfer capability (while you're at it, ensure opportunistic TLS support is enabled in email gateways for data-in-transit email protection with your business partners)
  8. Block the use of encrypted file attachments in outbound email in favour of the approved file transfer capability
  9. Implement a network surveillance system with TLS inspection, alert, traffic replay and alert suppression whitelisting capabilities
  10. Integrate DLP and network surveillance into a workflowed case management system supported by a well-resourced internal investigation and incident response function
  11. Insert honeytoken records into each of the sensitive customer data repositories so that when they are accessed across a network, the network surveillance generates alerts for mandatory investigation.
  12. Tune out the false positives from honeytoken alerts from regular batch file transfers between systems
  13. Revisit all of the customer data repositories and ensure that only a subset of users are authorised to access file export capabilities
  14. For key systems, implement a privileged access management solution with surveillance of administrative access and tight, workflowed integration with change and incident management approval to facilitate timeboxed privileged access
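On point 3, here's a minimal sketch of the kind of algorithmically checkable identifier construction I mean, using the well-known Luhn check digit purely as an example. A DLP signature that applies such a check can discard look-alike digit strings and so minimise false positives.
```python
# Sketch: validate a customer identifier's check digit (Luhn, as an example)
# so a DLP rule can ignore random digit strings that merely look similar.
def luhn_valid(identifier: str) -> bool:
    digits = [int(c) for c in identifier if c.isdigit()]
    if len(digits) < 2:
        return False
    checksum = 0
    # Double every second digit from the right, subtracting 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("79927398713"))  # True - classic Luhn test number
print(luhn_valid("79927398710"))  # False - fails the check digit
```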
Hope that gives you an insight into the complexities of tackling data leakage and insider threat. There are another two levels of detail under this plan required to execute it successfully through requirements, procurement, design, build and run.
As always, I welcome queries from fellow security professionals and interested executives.