Bear with me, readers: in this article I posit a controversial viewpoint. That viewpoint is that every technology acting as a proxy to transit security zones must be treated as a security asset, and that the security team needs to get more involved in load balancer design and operations.
What are we doing with proxies these days?
Working in information security on the technical side of things, you often get involved with the use of reverse and forward network proxies to transit security zones, which are typically defined by firewall appliances.
A reverse proxy terminates the flow of data from untrusted networks in a semi-trusted network zone, your external De-Militarized Zone (DMZ), and fetches data from a more trusted network zone. Your reverse proxy is meant to have minimal functionality enabled and to be security patched to vendor recommendations. The whole idea behind the reverse proxy is that even if a vulnerability on it is exploited, at least the attacker is not in the core of your network with access to everything.
A forward proxy fetches information from an untrusted network and returns it to a user or system in a trusted network zone. Sometimes in large enterprises you will have user proxies and application proxies: the user proxies fetch content from the internet on behalf of users, based on the categories users are permitted to retrieve content from, whilst the application proxies allow internal systems to integrate only with whitelisted URLs they are allowed to post/get data to/from. The forward proxies simplify firewall rules and routing, and help protect users and systems from retrieving malware or "phoning home" to malware command and control systems.
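The application-proxy allowlist described above boils down to a simple policy lookup before any request is forwarded. Here is a minimal sketch in Python; the policy table, system names and hosts are purely illustrative assumptions, not any particular product's configuration:

```python
# Sketch of the allowlist check an application forward proxy applies.
# ALLOWED_DESTINATIONS is a hypothetical policy table: internal system
# name -> set of external hosts that system may post/get data to/from.
from urllib.parse import urlsplit

ALLOWED_DESTINATIONS = {
    "billing-app": {"api.payments.example.com"},
    "crm-app": {"feeds.partner.example.net", "api.payments.example.com"},
}

def is_request_allowed(system: str, url: str) -> bool:
    """Return True if `system` may fetch `url` through the application proxy."""
    host = urlsplit(url).hostname or ""
    return host in ALLOWED_DESTINATIONS.get(system, set())
```

A real proxy would enforce this per connection and log every denied request for investigation.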
Proxies should also be used to terminate flows of encrypted data so that they can be inspected for malicious payloads. With a DMZ-located reverse proxy, an external-CA-issued digital certificate is often installed facing the internet, with the proxy connecting onwards to an internal-CA-issued digital certificate on the internal system. Private keys for these internal certificates can be loaded into intrusion detection and prevention systems and the like. Sometimes these proxies are chained to Web Application Firewalls. With forward proxies, user computers are configured to trust internal-CA-issued certificates, which the forward proxy uses to perform TLS inspection.
Other “proxy technology” can include application specific forward DNS servers and NTP servers.
Why do we do this?
Historically we have only had north-south network segregation, so web server vulnerabilities would result in access to the underlying operating system and shared network segments. With a DMZ, if a web server is compromised, maybe all the web servers end up being owned through lateral "pivoting", but at least the database servers do not.
Often the only reason we are running Apache in a DMZ as a reverse proxy is "because security told us" or "that's how we've always done it", or because that is how the vendor of the application commonly deploys it.
Developers would love to just run Apache and Tomcat on the same host, or heck, just Tomcat. Often Apache acting as a reverse proxy adds no application functionality, except hosting a "sorry, we're down" static web page during maintenance outages. In many cases the web server in the DMZ is just hosting a "plugin" that fetches content from another web server on the application server.
How are things changing?
Load balancers, also known as Application Delivery Controllers, are able to perform TLS re-termination and act as reverse proxies. The main point of ingress into network zones is now the load balancer.
Cloud based user forward proxies are becoming more popular as they provide the ability to protect mobile workers with the same surveillance and security policy as on premise users.
Newer versions of TLS are likely to mandate Perfect Forward Secrecy (PFS), and the use of static encryption keys will be deprecated.
Slowly but surely, east-west network segregation is arriving via the new buzzword in town: Software Defined Networking (SDN). With SDN you can have a virtual network setup for each key application and allow interactions with other key applications on the principle of least privilege. East-west segregation essentially turns every application into a "silo", restricting lateral movement by an attacker in datacentre networks. The days of physical network appliances physically wired to switches, routers and servers are numbered. The security market is moving more and more towards building reference models and QA-reviewing the deployment scripts that drive the build and secure configuration of application, server and network infrastructure.
Proxy logs are often mined to identify malware command-and-control communications: once the proxy is the sole method of communication with the internet for most users and systems, all malware communications go through it.
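As a toy illustration of that kind of log mining, the sketch below flags client/destination pairs that communicate at suspiciously regular intervals, a classic beaconing indicator of command-and-control traffic. The event format, thresholds and addresses are assumptions for illustration only:

```python
# Illustrative beacon detection over forward-proxy log events.
# A host contacting the same destination at near-constant intervals
# (low jitter) is likely automated rather than human-driven browsing.
from collections import defaultdict
from statistics import pstdev

def find_beacons(events, min_hits=5, max_jitter=2.0):
    """events: iterable of (timestamp_seconds, client, destination) tuples.
    Returns (client, destination) pairs whose request intervals are
    suspiciously regular (population std-dev <= max_jitter seconds)."""
    series = defaultdict(list)
    for ts, client, dest in events:
        series[(client, dest)].append(ts)
    beacons = []
    for key, stamps in series.items():
        stamps.sort()
        if len(stamps) < min_hits:
            continue
        intervals = [b - a for a, b in zip(stamps, stamps[1:])]
        if pstdev(intervals) <= max_jitter:  # low jitter => likely automated
            beacons.append(key)
    return beacons
```

A production system would also weight rarely seen or uncategorised destinations more heavily than well-known ones.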
So what as a security team should we be doing?
The enterprise security function must take on responsibility for security policy on web servers that perform reverse proxy capabilities.
Enterprise security must take on responsibility for governing security policy implementation on application delivery controllers leveraging all native capabilities available such as TLS configuration, Layer 3 filtering, Layer 7 filtering, content inspection and identity and access management integration.
The security function must retain responsibility for governance of security policy on forward proxies, tweak the policy to restrict the download of "dangerous file types" to trusted web sites only (e.g. only allow .exe downloads from trusted vendor websites), and look seriously at implementing user-behaviour-driven anomaly detection as well as sandboxing to detect malicious PDFs and the like.
The security function must work with network architecture to see if the functions of tier 1 firewall, forward proxy and web server can be collapsed. Perhaps this can be accomplished with load balancers to simplify the network topology and allow us to deploy security policy in a single place? If a virtual load balancer can deliver secure layer 3 stateful firewalling capability, do we even need tier 1 firewalls in front of them?
The security function must plan to support newer versions of TLS, which may mandate perfect forward secrecy, whilst maintaining the ability to inspect web content coming into our DMZ.
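As a concrete illustration of a forward-secrecy-only TLS policy, here is a sketch using Python's ssl module; a real deployment would apply the equivalent setting on the load balancer or reverse proxy itself, and the cipher string shown is one reasonable choice, not a universal recommendation:

```python
# Sketch: build a server-side TLS context restricted to forward-secret
# key exchange. TLS 1.3 suites are ephemeral-key only by design; for
# TLS 1.2 we permit only ECDHE (ephemeral ECDH) suites.
import ssl

def forward_secrecy_context() -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Restrict TLS 1.2 suites to ECDHE with AEAD ciphers; TLS 1.3 suites
    # are configured separately by OpenSSL and are always forward secret.
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
    return ctx
```

Note the tension the article raises: with no static RSA key exchange, passive inspection devices can no longer decrypt captured traffic using the server's private key, so inspection has to happen at the terminating proxy.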
Here are a few suggestions I encourage you to take:
Inventory the proxy technologies in your organisation and what SSL/TLS inspection is being performed.
Investigate the native and optional security capabilities of load balancers, whether they are hardware appliances, virtual appliances or Amazon Elastic Load Balancers.
Develop a strategy/roadmap for consolidation/simplification of reverse proxy capabilities and for addressing the support of future versions of TLS with mandatory perfect forward secrecy.
Investigate the capabilities of your existing forward proxies and whether you are making the most of that investment.
With all the press related to the Panama Papers, I began thinking again about insider threat. So here is a quick list of suggested actions specifically to tackle data leakage, whistleblowing and insider threat. This is a particularly difficult challenge in information security, as you often need to provide the lowest-level employees in the organisation with access to all customer records to facilitate timely customer service processes.
Engage an organisation to provide an independent whistleblower call centre and encrypted contact form service, with investigation support, to give employees an alternative to going to the press in case of middle and even senior management misconduct. This is a fail-safe measure to prevent corporate data sets being exfiltrated to the press by well-meaning if misguided employees. It also improves the ability to prosecute malicious insiders who may claim whistleblower protections as legal cover for a failed data theft or sale.
Identify the most sensitive information in the organisation and the systems in which it resides. Check that access to this information is authenticated and logged, i.e. that access to the content is logged, not just authentication success/failure.
Investigate to see if there is an easily identifiable identifier for each customer record. Investigate its construction. Even consider modifying its construction so it is based on an algorithm that can easily be checked in a Data Leakage Prevention system signature to minimise false positives.
Block unapproved file sharing, webmail and uncategorised websites in the corporate web proxy policy.
Provide an approved file transfer capability for ad-hoc file sharing with business partners.
Block USB storage device usage. Perhaps only allow the use of corporate issued encrypted USBs for the required edge use cases which enforce centralised logging of file activity.
Implement TLS inspection of web traffic and Data Leakage Prevention (DLP) on endpoint, web and email traffic, including coverage of the approved file transfer capability. (While you are at it, ensure opportunistic TLS support in email gateways is enabled for data-in-transit email protection with your business partners.)
Block the use of encrypted file attachments in outbound email in favour of the approved file transfer capability.
Implement a network surveillance system with TLS inspection, alerting, traffic replay and alert suppression whitelisting capabilities.
Integrate DLP and network surveillance into a workflowed case management system supported by a well-resourced internal investigation and incident response function.
Insert honeytoken records into each of the sensitive customer data repositories so that when they are accessed across a network, the network surveillance generates alerts for mandatory investigation.
Tune out the false positives in honeytoken alerts generated by regular batch file transfers between systems.
Revisit all of the customer data repositories and ensure that only a subset of users are authorised to access file export capabilities.
For key systems, implement a privileged access management solution with surveillance of administrative access and tight, workflowed integration with change and incident management approval to facilitate timeboxed privileged access.
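To illustrate the check-digit identifier idea above (constructing customer IDs so a DLP signature can verify candidate matches and discard most random digit strings), here is a sketch using the well-known Luhn algorithm; the ten-digit format and the functions are illustrative assumptions, not a prescription:

```python
# Sketch: customer identifiers with a Luhn check digit, so that a DLP
# rule matching ten-digit numbers can validate candidates and cut
# false positives roughly tenfold.
def luhn_check_digit(body: str) -> str:
    """Compute the Luhn check digit for a string of digits."""
    total = 0
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:       # double alternate digits, counting from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def make_customer_id(sequence: int) -> str:
    """Nine-digit sequence number plus one check digit (illustrative format)."""
    body = f"{sequence:09d}"
    return body + luhn_check_digit(body)

def looks_like_customer_id(candidate: str) -> bool:
    """The validation a DLP engine would run on each regex hit."""
    return (len(candidate) == 10 and candidate.isdigit()
            and candidate[-1] == luhn_check_digit(candidate[:-1]))
```

Commercial DLP products commonly support exactly this pattern of a regex plus a checksum validator (credit card detection works the same way).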
Hope that gives you an insight into the complexities of tackling data leakage and insider threat. There are another two levels of detail under this plan required to execute it successfully through requirements, procurement, design, build and run.
As always, I welcome queries from fellow security professionals and interested executives.
Over the years, yours truly has been heavily involved with the evolution of the modern "computer security" function in a number of organisations. I thought it might benefit readers to receive a brief history lesson, take a current pulse check and look forward to the future evolution of the information security function.
In the 1990s the security function in an organisation was called “IT Security” and looked a little like this:
The team had an asset-centric focus, deploying, maintaining and managing protective technologies including anti-virus software, web content management, firewalls and intrusion detection systems.
Security products were typically 'bolted on' to technology environments to help mitigate underlying deficiencies, and IT security teams were always looking for 'silver bullets' from security vendors to resolve security concerns, or at least something to point at when a security breach occurred with a "hey, we tried; look, we bought a widget that said it fixed that".
IT security’s role involved saying ‘no’ to projects with little explanation or justification mostly because personnel had no idea of how to comply with policy.
Organisations relied on a perimeter-based approach to network security, with these perimeters typically being well defined thanks to physical security.
Security reported to the CIO for budget, and the CIO often declined security team requests, with lack of budget being the number one concern for "security managers".
The modern information security practice of the early 2000s looks more like this:
The focus is on data and a risk driven approach to securing it wherever it may be.
A key activity is securing the expanded enterprise with corporate and customer data held by service providers
Many organisations are successfully preventing accidental data leakage by employees with signature based technology such as Data Leakage Prevention (DLP)
Security is focused on saying yes to project teams and the business and making sure key building blocks are available for consumption such as typical deployment patterns and reference architectures with security built in with a “look here’s one we prepared earlier” attitude.
The network perimeter has been expanded to encompass corporate mobile devices and corporate wireless systems
More security controls are delivered through the upper layers of the Open Systems Interconnection (OSI) model, with Web Application Firewall and Identity and Access Management products being used to secure access to applications and databases
Security controls are embedded in infrastructure builds through security engagement with SOE builds.
Security is built in via security standards and process as part of the Software Development Life Cycle (SDLC) focusing on mitigating vulnerabilities
The security team reports to the Board, keeps the CIO honest and often is well funded due to executive level concerns about brand damage and service interruption due to well publicized security breaches.
The future of the information security function might look like this:
Increased service provider focus due to "cloud", with service provider selection and evaluation against a defined set of hosting use cases as a key competency
Security playing a key role in driving corporate strategy and providing constructive criticism in the area of cloud adoption especially in the fields of service orchestration and business continuity
Extensive use of personal devices for corporate activities (phones, tablets, home PCs) for teleworking/mobile working, with a selection of security controls that balance security with user experience
Widespread use of security services provided via the cloud such as: threat intelligence, web proxy and malware detection
Full charge-back models being deployed for the use of security services direct to business units on a per-employee or per-customer basis, making the security function self-funding.
Malicious data leakage detection undertaken through extensive network and system data analytics
Building and selecting secure systems through SDLC and procurement processes, with a focus on consistent key security control implementation
Software defined networking and full virtualisation of the network
Horizontal service based network segregation rather than vertical tiers
Enabling developers to self assess the security of their code, libraries and dependencies to facilitate containerisation adoption, reduce time to market and eliminate environment inconsistencies.
Security perhaps reports back to a CIO whose role has evolved from IT asset management responsibility to data and service provider governance, with alignment rather than opposition to information security objectives.
It’s always wonderful to start a new year. A new year brings a fresh perspective and renewed enthusiasm. So what do I think twenty-fifteen will bring us?
More breaches! No organisation's security is perfect; security breaches, data theft and public data disclosure will continue. Generally, in the private sector you just have to "target harden" enough until security becomes a competitive advantage instead of a liability to your executives' tenure.
More badly written regulation. Unfortunately, many regulators write security and privacy regulations and legislation with no alignment to the ISO 27K series of standards, the bible for information security and the basis for many Information Security Management Systems. For goodness sake regulators, please consult Wikipedia before you pick up your pens, or engage professionals! At the very least, include a mapping to the relevant ISO standard and align your control statements with its control statements. Help us ease the compliance burden if you must re-write the bible! If you feel the need to elaborate on the standards, participate in the standards committees!
DevSecOps. The cloud-enabled approach of version-controlling your infrastructure deployments via automation scripts will continue. Securely configured Amazon Machine Images (AMIs) are now available, and organisations will more widely start to deploy file system permissions and application software secure configurations along with binaries and compiled code as part of automated deployments.
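As a toy illustration of shipping secure configuration alongside code, the sketch below applies a versioned file-permission manifest during deployment; the manifest entries, paths and modes are hypothetical examples:

```python
# Sketch: apply a file-permission hardening manifest as a deployment step.
# The manifest would be checked into version control with the application.
import os
import stat

# Hypothetical hardening manifest: relative path -> required octal mode.
HARDENING_MANIFEST = {
    "app.conf": 0o640,      # config readable by owner and service group only
    "secrets.env": 0o600,   # secrets readable by owner only
}

def apply_hardening(root: str, manifest: dict) -> list:
    """chmod each manifest entry under `root`; return the paths changed."""
    changed = []
    for rel, mode in manifest.items():
        path = os.path.join(root, rel)
        if stat.S_IMODE(os.stat(path).st_mode) != mode:
            os.chmod(path, mode)
            changed.append(rel)
    return changed
```

Because the step is idempotent and reports drift, it can double as a compliance check on every deploy, not just at build time.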
Agile Security. Security governance, architecture and testing need to revisit their core functions and re-invent themselves to enable agile development rather than hinder it. This may mean firewall rule requirements gathering as part of design for epics, abuse cases written up as part of user stories, static analysis as part of development frameworks, developer initiated dynamic analysis, dedicated security testing and code review resources for core enterprise applications etc.
Mergers and acquisitions in product land. As anti-malware becomes less effective, I suspect we will see the "200 pound gorillas" acquire smaller, more agile security companies with more advanced malware protection technologies. I would not be surprised to see mainstream web content management systems enabled with technologies that detect malware command-and-control communications, or email content systems that dynamically quarantine files with suspected malicious content identified via sandbox-based analysis, rather than by an MD5 hash designated as malicious by an overworked analyst in a data entry environment after someone submits a malware sample.
As always, your comments are welcome below, and please consider following me on Twitter for more irreverent commentary!
Why should a CSO care about the government's mandatory data retention scheme? It’s your customers’ metadata. It’s your company’s metadata.
1. The Australian government is essentially treating all Australian citizens as suspects and proactively issuing a nation-wide data preservation notice. Previously these were only issued on persons of interest. Everyone's getting wiretapped, not just the suspected criminals.
2. Metadata can be accessed without a warrant by a subset of government agencies defined as "enforcement agencies" in the act. Enforcement agencies include the ATO, Centrelink, the RSPCA and local councils, as well as police and ASIO.
3. Many organisations being asked to retain data (ISPs and telcos) are not geared up to do this. These requirements will impact personnel, process and technology, not to mention the security requirements. ISPs and telcos are currently only geared up for "law enforcement intercept" to facilitate the wiretapping of persons of interest.
4. The definition of metadata is vague and is likely to expand to meet the "mission". For example, if Senator Brandis wants to find out who has been watching a certain extremist propaganda video on LiveLeak, he'll need the subscriber details, source IP and destination URL. Therefore the government will need to retain (at least) all users' URL history if it wants to achieve this objective. Location data is currently excluded, but this probably means GPS locations, not the physical addresses of service subscribers. If new technologies like peer-to-peer emerge, the definition is likely to expand or be "legally re-interpreted", perhaps in secret as the US has done in the past.
Do not underestimate the importance and sensitivity of metadata. At a minimum it provides enough information to blackmail someone. The former director of the NSA, Michael Hayden, put metadata in the context of military use with the quote "We kill people based on metadata". Mapping sets of data about "persons of interest" together is a point-and-click activity these days for the software used by intelligence agencies, such as Palantir. If whistle-blowers, journalists, activists and politicians become persons of interest, there is a threat to the free press, to the correct operation of our democracy and to the independence of our legislators from undue influence by our intelligence agencies.
5. If extremists or paedophiles look on the internet for twenty minutes, they will find useful methods for encrypting, proxying and hiding their communications that still necessitate intelligence agencies compromising the endpoint. Wiretapping will still need to happen for the bad guys.
6. Customers will demand SSL encryption on all corporate web applications to protect their privacy, leading to changes of infrastructure and increased costs.
7. To maintain access to the data due to "the scourge of encryption", the government could mandate that ISPs intercept SSL traffic.
8. The existing website blacklist could be expanded in its use. Expect something like YouTube to be accidentally blocked; it has already happened to 250,000 websites following an ASIC request to take down a scam website.
9. The data retained by these ISP systems, and the access made available to it, will be unprecedented and poorly secured. It will become a target for criminals, private investigators and debt collectors. There will be incidents of misuse by administrators, including resale of the data. Even nation state threat actors will want access to these juicy targets.
10. ISPs and telcos will get hacked, and the government will blame them for poor security.
In summary, Senator Brandis should tell the AFP and ASIO to go get specific warrants and go after the terrorists with justified, targeted surveillance, not a society-wide "fishing expedition".
The recent compromise of iCloud backups of celebrities has piqued interest in the security of consumer cloud services. To paraphrase Del Harvey of Twitter: when you have a million events a day, a one-in-a-million event happens once a day.
Below are a few of my thoughts; securing a commodity cloud service requires a lot of disciplined thinking:
1. If you run a mass market cloud service you need to do some serious threat modelling, including:
Consider your users. Not all users are the same. For example, human rights activists and celebrities are at risk of targeted attacks. You will need to categorise your users in enough granularity to apply security controls matching the threats. For example, using birthdates to perform password resets may not be as effective for celebrities.
Consider their information assets.
Consider threat actors. For example cybercriminals, nation state actors, abusive ex-husbands, garden variety "script kiddies".
2. You should then select appropriate security controls for the threats identified, perhaps even using structured thinking like attack trees or "cyber kill chain" to pick the most effective.
3. You need to test the controls. This includes functional, user experience and penetration testing.
4. You need to have the process and people to be able to respond to security incidents including reports of vulnerabilities as well as breaches large and small.
5. Should consumers be able to opt in to increased security controls, the application of which is arbitrated by the cloud service provider? For example, Twitter has a "verified" option for public figures to prevent hoax accounts.
6. Some organisations can and will opt out of commodity cloud services and instead put in bespoke solutions. Instead of Twitter, many companies use Yammer.
7. Large organisations can control the use of cloud services in many ways.
For example, a mobile device management solution that uses the iOS APIs can disable the use of iCloud.
A web proxy can be configured to block or monitor the use of commodity cloud services like Gmail and Dropbox.
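As a sketch of the user-categorisation idea from the threat-modelling points above (matching controls to the threats each user class faces), the tiers, rules and reset methods below are purely illustrative assumptions:

```python
# Sketch: tier users by risk so security controls match their threats.
# Tier names, thresholds and reset methods are illustrative only.
def risk_tier(user: dict) -> str:
    """Classify a user record into a control tier."""
    if user.get("public_figure") or user.get("at_risk"):
        return "high"        # e.g. celebrities, activists: targeted attacks
    if user.get("account_value", 0) > 10_000:
        return "elevated"
    return "standard"

def allowed_reset_methods(user: dict) -> list:
    """Password-reset methods permitted for this tier; note that
    knowledge-based resets (e.g. birthdates) are absent from the high tier."""
    methods = {
        "high": ["hardware_token", "support_callback"],
        "elevated": ["totp", "email_link"],
        "standard": ["email_link", "sms_code"],
    }
    return methods[risk_tier(user)]
```

The point is not the specific rules but that control selection is driven by an explicit, testable user classification rather than a one-size-fits-all policy.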
Once there were mainframes: standalone systems fed by punch cards and teletypewriters. They had tight role-based access control models, often externalised to the operating system and application.
Everyone wanted access to them so the teletypewriters were extended with serial connections and then modems to allow remote access.
Eventually minicomputers were connected to public networks, the precursors to the internet, and network services like sendmail were written and exposed.
One of the first network worms, ironically released by the son of a computer security researcher, knocked a good portion of the internet offline.
Now the first bolt-on security product was released: the firewall.
Essentially it was a clever kludge for the problem of having too many publicly accessible computers to manage, each with a default install running insecure network services.
Now personal computers blossomed too, with the rise of personal productivity software and a thriving shareware culture.
Some clever idiots started mixing malicious software in with legitimate software, and we had the rise of computer viruses: malicious software that required user interaction to replicate.
The second bolt-on security product was born: antivirus.
Antivirus checked files you opened against a database of known malicious software. This again was just a kludge for the problems of poor user awareness, users running with administrative privileges due to undercooked operating systems, and the lack of a mechanism for easily identifying whether software was trustworthy before executing it.
Attackers started digging into the network services for vulnerabilities as default passwords and debug functions started getting turned off, and found a soft underbelly in web server software. Web site defacements rose, and another product arose: Intrusion Detection, essentially a networked version of antivirus looking at packets rather than files.
This helped operations teams get a bit ahead of the game and respond to compromises of internet-facing services in a timely manner.
Big outages due to network worms on internal networks, affecting the dominant server and desktop operating system, drove Microsoft's boss Bill Gates to issue the Trustworthy Computing memo, essentially telling the company that security needed to be a top priority for its success.
Microsoft started turning the supertanker in the right direction by introducing security into its SDLC, assisting law enforcement and pushing security fixes via a security bulletin process.
The company also delivered operating systems in which excessive network services no longer ran as part of default installs, the user ran under reduced privileges, and operating system components were cryptographically signed, starting to address the root causes of poor operating system security.
But now the threat environment had changed: the threat was no longer computer enthusiasts, "hackers" being a bit too curious or "crackers" being too destructive; it was starting to become organised criminals.
Criminals figured out that serious money was starting to move through computer systems, and that malware called remote access Trojans could help them steal credit card numbers and internet banking credentials, facilitating fraud.
Now, finally, spies got on the internet too, as the majority of the world's information came to be stored in computer systems.
So here we are at a pretty interesting time in information security. The latest operating systems have become easier to manage security-wise and more security is "built in", but most organisations aren't running them yet.
The threat landscape has changed, with the attackers motivated by financial gain.
Vulnerabilities are no longer being publicly disclosed, but instead sold to the highest bidder.
Often it's the application software like Java and Flash which is now being targeted on the desktop.
Information security professionals now have to worry about nation state backed threat actors as well as organised crime backed cyber criminals.
The bolt-on security controls have become less effective, as the threat actors have learned to hide from them or tunnel through them.
Now we have to chase emerging technologies to stay ahead of the threat actors, as traditional security vendors haven't innovated quickly enough.
Additionally, information technology is again re-inventing itself with "cloud", "mobility", "BYOD", "flexible working", "off-shoring" and a handful of other disruptive ideas.
Just as the better operating systems are arriving, we are swapping out Windows desktops connected to wired networks for laptops connected to wireless networks.
Executives demand email and apps on their iPads and iPhones, introducing new operating systems and new ways of managing them.
Businesses want "instant on" software as a service applications rather than taking the risk to develop or deploy in house solutions.
It's a rapidly changing battlefield in terms of threat landscape and the availability and effectiveness of security controls.
Information security is having to step up a level and think differently about enabling users and third parties to secure themselves, and about securing the data we share with them, as we lose the ability to enforce security controls ourselves at the operating system layer.
Infosec - never a dull moment if you're doing it right.