Sunday, May 10, 2020

Repost - are all proxies actually security assets?

Bear with me, readers: in this article I posit a controversial viewpoint. All existing technologies acting as proxies to transit security zones must be treated as security assets, and the security team needs to get more involved in load balancer design and operations.
What are we doing with proxies these days?
If you work on the technical side of information security, you often get involved with reverse and forward network proxies used to transit security zones, which are typically defined by firewall appliances.
A reverse proxy terminates the flow of data from untrusted networks in a semi-trusted network zone (your external De-Militarized Zone) and fetches data from a more trusted network zone. Your reverse proxy is meant to have minimal functionality enabled and to be patched up to vendor recommendations. The whole idea behind the reverse proxy is that even if a vulnerability on it is exploited, at least it’s not in the core of your network with access to everything.
A forward proxy fetches information from an untrusted network and returns it to a user or a system in a trusted network zone. In large enterprises you will sometimes have separate user proxies and application proxies. The user proxies fetch content from the internet on behalf of users and enforce the categories of content users are permitted to retrieve, whilst the application proxies allow internal systems to integrate only with whitelisted URLs they are permitted to post data to or get data from. Forward proxies simplify firewall rules and routing, and help protect users and systems from retrieving malware or “phoning home” to malware command and control systems.
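To make the application proxy idea concrete, here is a minimal sketch in Python of the kind of allowlist check an application forward proxy performs before letting an internal system fetch or post data externally; the partner URLs and the allowlist format are hypothetical.

    from urllib.parse import urlparse

    # Hypothetical allowlist: internal systems may only talk to these partner
    # endpoints (scheme, host and path prefix all have to match).
    ALLOWED_ENDPOINTS = [
        ("https", "api.examplepartner.com", "/v1/payments"),
        ("https", "feeds.examplevendor.net", "/price-list"),
    ]

    def is_allowed(url: str) -> bool:
        """Return True if the proxy should forward this request."""
        parsed = urlparse(url)
        for scheme, host, path_prefix in ALLOWED_ENDPOINTS:
            if (parsed.scheme == scheme
                    and parsed.hostname == host
                    and parsed.path.startswith(path_prefix)):
                return True
        return False

    print(is_allowed("https://api.examplepartner.com/v1/payments/batch"))  # forwarded
    print(is_allowed("https://evil.example.org/exfil"))                    # blocked

Everything that fails the check is dropped and logged, which is part of what makes the proxy logs so valuable for detection later on.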
Proxies should also be used to terminate flows of encrypted data so that they can be inspected for malicious payloads. On a DMZ-located reverse proxy, an external-CA-issued digital certificate is often installed, with the proxy connecting onward to an internal-CA-issued digital certificate on the internal system. Private keys for these internal certificates can be loaded into intrusion detection and prevention systems and the like. Sometimes these proxies are chained to Web Application Firewalls. With forward proxies, user computers are configured to trust internal-CA-issued certificates, which the forward proxy uses to perform TLS inspection.
Other “proxy technology” can include application specific forward DNS servers and NTP servers.
Why do we do this?
Historically we have only had north-south network segregation, so web server vulnerabilities would result in access to the underlying operating system and shared network segments. With a DMZ, if a web server is compromised, perhaps all the web servers end up being owned through lateral “pivoting”, but at least the database servers do not.
Often the only reason we are running Apache in a DMZ as a reverse proxy is “because security told us”, or “that’s how we’ve always done it”, or because that is how the application vendor commonly deploys it.
Developers would love to just run Apache and Tomcat on the same host, or, heck, just Tomcat. Often Apache acting as a reverse proxy adds no application functionality, except for hosting a “sorry we’re down” static web page during maintenance outages. In many cases the web server in the DMZ is just hosting a “plugin” that fetches content from another web server on the application server.
How are things changing?
Load balancers, also known as Application Delivery Controllers, can perform TLS re-termination and act as reverse proxies. The main point of ingress into network zones is now the load balancer.
Cloud-based user forward proxies are becoming more popular as they provide the ability to protect mobile workers with the same surveillance and security policy as on-premises users.
Newer versions of TLS are likely to implement Perfect Forward Secrecy (PFS), and the use of static encryption keys will be deprecated.
Slowly but surely, east-west network segregation is arriving via the new buzzword in town, Software Defined Networking (SDN). With SDN you can have a virtual network setup for each key application and allow interactions with other key applications on the principle of least privilege. East-west segregation essentially turns every application into a “silo”, restricting lateral movement by an attacker in datacentre networks. The days of physical network appliances physically wired to switches, routers and servers are numbered. The security market is moving more and more towards building reference models and QA-reviewing the deployment scripts that drive the build and secure configuration of application, server and network infrastructure.
Proxy logs are also being mined to identify malware command and control communications: once the proxy is the sole method of communication with the internet for most users and systems, all malware communications go through it.
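As a toy illustration of that log mining, the sketch below (Python, with a made-up two-column log format and made-up domains) flags clients that repeatedly contact domains outside a known-good set, a crude stand-in for the beaconing detection a commercial tool would perform.

    import csv
    from collections import Counter

    # Hypothetical proxy log: one "client_ip,destination_domain" row per request.
    KNOWN_GOOD = {"intranet.example.com", "update.examplevendor.com"}
    BEACON_THRESHOLD = 50  # many requests to one unknown domain looks like beaconing

    def suspected_beacons(log_path):
        """Yield (client_ip, domain, count) tuples worth an analyst's attention."""
        counts = Counter()
        with open(log_path, newline="") as handle:
            for row in csv.reader(handle):
                if len(row) != 2:
                    continue  # skip malformed lines
                client_ip, domain = row
                if domain not in KNOWN_GOOD:
                    counts[(client_ip, domain)] += 1
        for (client_ip, domain), count in counts.items():
            if count >= BEACON_THRESHOLD:
                yield client_ip, domain, count

    for ip, domain, count in suspected_beacons("proxy.log"):
        print(f"{ip} contacted {domain} {count} times - investigate")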
So what as a security team should we be doing?
The enterprise security function must take on responsibility for security policy on web servers that perform reverse proxy capabilities.
Enterprise security must take on responsibility for governing security policy implementation on application delivery controllers leveraging all native capabilities available such as TLS configuration, Layer 3 filtering, Layer 7 filtering, content inspection and identity and access management integration.
The security function must retain responsibility for governance of security policy on forward proxies, tweak the policy to restrict the download of “dangerous file types” to trusted web sites only (e.g. only allow .exe downloads from trusted vendor websites), and look seriously at implementing user-behaviour-driven anomaly detection as well as sandboxing to detect malicious PDFs and the like.
The security function must work with network architecture to see if the functions of tier 1 firewall, forward proxy and web server can be collapsed. Perhaps this can be accomplished with load balancers to simplify the network topology and allow us to deploy security policy in a single place? If a virtual load balancer can deliver secure layer 3 stateful firewalling capability, do we even need tier 1 firewalls in front of them?
The security function must plan to support newer versions of TLS, which may implement perfect forward secrecy, whilst maintaining the ability to inspect web content coming into our DMZ.
Next Steps
Here’s a few suggestions I encourage you to take:
  • Inventory the proxy technologies in your organisation and what SSL/TLS inspection is being performed.
  • Investigate the native and optional security capabilities of load balancers, whether they are hardware appliances, virtual appliances or Amazon Elastic Load Balancers.
  • Develop a strategy/roadmap for consolidation and simplification of reverse proxy capabilities and for supporting future versions of TLS with mandatory perfect forward secrecy.
  • Investigate the capabilities of your existing forward proxies and whether you are making the most of that investment.

Repost - Mossack Fonseca - Insider Threat - What would you do?

So with all the press related to the Panama Papers I began thinking again about insider threat. So here is a quick list of suggested actions specifically to tackle data leakage/whistleblowing/insider threat. This is a particularly difficult challenge in information security as you often need to provide access to all customer records to the lowest level of employees within the organisation to facilitate timely customer service processes.
  1. Engage an organisation to provide an independent whistleblower call center and encrypted contact form service with investigation support for the organisation to provide employees with an alternative to going to the press in case of middle and even senior management misconduct. This is a fail safe measure to prevent corporate data sets being exfiltrated to the press by well meaning if misguided employees. This also provides an increased ability for prosecution of insider malicious actors who may claim whistleblower protections as legal cover for a failed data theft/sale.
  2. Identify the most sensitive information in the organisation and the systems in which it resides. Check that access to this information is authenticated and logged, i.e. access to the content, not just authentication success/failure.
  3. Identify whether there is an easily recognisable identifier for each customer record and investigate its construction. Even consider modifying its construction so that it is based on an algorithm that can easily be checked in a Data Leakage Prevention system signature to minimise false positives (see the sketch after this list).
  4. Block unapproved file sharing, webmail and uncategorised websites in the corporate web proxy policy.
  5. Provide an approved file transfer capability for ad-hoc file sharing with business partners.
  6. Block USB storage device usage. Perhaps only allow the use of corporate issued encrypted USBs for the required edge use cases which enforce centralised logging of file activity.
  7. Implement TLS inspection of web traffic and Data Leakage Prevention (DLP) on endpoint, web and email traffic including coverage of the approved file transfer capability (while you are at it ensure opportunistic TLS support in email gateways is enabled for data in transit email protection with your business partners)
  8. Block the use of encrypted file attachments in outbound email in favour of the approved file transfer capability
  9. Implement a network surveillance system with TLS inspection, alert, traffic replay and alert suppression whitelisting capabilities
  10. Integrate DLP and network surveillance alerts into a workflowed case management system supported by a well-resourced internal investigation and incident response function.
  11. Insert honeytoken records into each of the sensitive customer data repositories so that when they are accessed across a network, the network surveillance generates alerts for mandatory investigation.
  12. Tune out the honeytoken false positives generated by regular batch file transfers between systems.
  13. Revisit all of the customer data repositories and ensure that only a subset of users are authorised to access file export capabilities
  14. For key systems, implement a privileged access management solution with surveillance of administrative access and tight, workflowed integration with change and incident management approvals to facilitate time-boxed privileged access.
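On point 3 above, the easiest way to make an identifier algorithmically checkable is a Luhn-style check digit. The sketch below is a minimal Python illustration, assuming a hypothetical eight-digit customer number format; the same validation logic can be expressed in a DLP product's custom content rule so that random eight-digit strings no longer trigger alerts.

    def luhn_valid(number: str) -> bool:
        """Standard Luhn check: the last digit is a check digit over the rest."""
        checksum = 0
        for position, char in enumerate(reversed(number)):
            digit = int(char)
            if position % 2 == 1:       # double every second digit from the right
                digit *= 2
                if digit > 9:
                    digit -= 9
            checksum += digit
        return checksum % 10 == 0

    def looks_like_customer_id(token: str) -> bool:
        """Hypothetical format: exactly 8 digits ending in a valid Luhn check digit."""
        return len(token) == 8 and token.isdigit() and luhn_valid(token)

    print(looks_like_customer_id("12345674"))  # True  - check digit is valid
    print(looks_like_customer_id("12345678"))  # False - just a random 8-digit string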
Hope that gives you an insight into the complexities of tackling data leakage and insider threat. There are another two levels of detail beneath this plan that are required to execute it successfully through requirements, procurement, design, build and run.
As always, I welcome queries from fellow security professionals and interested executives.

Repost - From IT security to information security and beyond...

Over the years, yours truly has been heavily involved with the evolution of the modern “computer security” function in a number of organisations. I thought it might benefit readers to receive a brief history lesson, take a current pulse check and look at the future of the information security function.
In the 1990s the security function in an organisation was called “IT Security” and looked a little like this:
  • The team had an asset-centric focus, concentrating on deploying, maintaining and managing protective technologies including anti-virus software, web content management, firewalls and intrusion detection systems.
  • Security products were typically ‘bolted on’ to technology environments to help mitigate underlying deficiencies, and IT security teams were always looking for ‘silver bullets’ from security vendors to resolve security concerns, or at least something to point at when a breach occurred: “hey, we tried; look, we bought a widget that said it fixed that”.
  • IT security’s role involved saying ‘no’ to projects with little explanation or justification mostly because personnel had no idea of how to comply with policy.
  • Organisations relied on a perimeter based approach to network security, with these perimeters typically being well defined due to physical security
  • Security reported to the CIO for budget and the CIO often declined security team requests with a lack of budget being a number one concern for “security managers”
The modern information security practice of the early 2000s looks more like this:
  • The focus is on data and a risk driven approach to securing it wherever it may be.
  • A key activity is securing the expanded enterprise with corporate and customer data held by service providers 
  • Many organisations are successfully preventing accidental data leakage by employees with signature based technology such as Data Leakage Prevention (DLP)
  • Security is focused on saying yes to project teams and the business and making sure key building blocks are available for consumption such as typical deployment patterns and reference architectures with security built in with a “look here’s one we prepared earlier” attitude.
  • The network perimeter has been expanded to encompass corporate mobile devices and corporate wireless systems
  • More security controls are delivered through the upper layers of the Open Systems Interconnection model, with Web Application Firewall and Identity and Access Management products being used to secure access to applications and databases.
  • Security controls are embedded in infrastructure builds through security engagement with SOE builds.
  • Security is built in via security standards and process as part of the Software Development Life Cycle (SDLC) focusing on mitigating vulnerabilities
  • The security team reports to the Board, keeps the CIO honest and often is well funded due to executive level concerns about brand damage and service interruption due to well publicized security breaches.
The future of the information security function might look like this:
  • Increased service provider focus driven by “cloud” adoption, with service provider selection and evaluation against a defined set of hosting use cases as a key competency.
  • Security playing a key role in driving corporate strategy and providing constructive criticism in the area of cloud adoption especially in the fields of service orchestration and business continuity
  • Extensive use of personal devices for corporate activities (phones, tablets, home PCs) for teleworking/mobile working, with a selection of security controls that balance security with user experience.
  • Widespread use of security services provided via the cloud such as: threat intelligence, web proxy and malware detection
  • Full charge back models being deployed for use of security services direct to business units on a per employee or per customer basis, making the security function self funding.
  • Malicious data leakage detection undertaken through extensive network and system data analytics
  • Building and selecting secure systems through SDLC and procurement processes, with a focus on consistent key security control implementation
  • Software defined networking and full virtualisation of the network
  • Horizontal service based network segregation rather than vertical tiers
  • Enabling developers to self assess the security of their code, libraries and dependencies to facilitate containerisation adoption, reduce time to market and eliminate environment inconsistencies.
  • Security perhaps reports back to the CIO, whose role has evolved from IT asset management responsibility to data and service provider governance, with alignment rather than opposition to information security objectives.

Repost - what will 2015 bring to infosec

It’s always wonderful to start a new year. A new year brings a fresh perspective and renewed enthusiasm. So what do I think twenty-fifteen will bring us?
More breaches! No organisation’s security is perfect; security breaches, data theft and public data disclosure will continue. Generally, in the private sector you just have to “target harden” enough until security becomes a competitive advantage instead of a liability to your executive’s tenure.
More badly written regulation. Unfortunately many regulators write security and privacy regulations and legislation with no alignment to the ISO 27K series of standards, the bible for information security and the basis for many Information Security Management Systems. For goodness sake regulators, please consult Wikipedia before you pick up your pens, or engage professionals! At the very least, include a mapping to the relevant ISO standard and align your standard statements to its control statements. Help us ease the compliance burden if you must re-write the bible! If you feel the need to elaborate on the standards, participate in the standards committees!
DevSecOps. The cloud enabled approach of version controlling your infrastructure deployments via automation scripts will continue. Securely configured Amazon Machine Images (AMIs) are now available and organisations will more widely start to deploy file system permissions and application software secure configurations along with binaries and compiled code as part of automated deployments.
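As a trivial illustration of the idea, here is a sketch (Python, with hypothetical file paths) of a deployment step that lays down an application configuration file alongside the binaries and enforces restrictive file system permissions, so every environment comes out configured the same way.

    import os
    import shutil
    import stat

    # Hypothetical paths baked into the deployment pipeline.
    CONFIG_TEMPLATE = "build/app.conf"          # reviewed, version-controlled template
    DEPLOY_DIR = "/opt/exampleapp/etc"
    DEPLOYED_CONFIG = os.path.join(DEPLOY_DIR, "app.conf")

    def deploy_secure_config():
        """Copy the reviewed config into place with owner-only permissions."""
        os.makedirs(DEPLOY_DIR, exist_ok=True)
        shutil.copyfile(CONFIG_TEMPLATE, DEPLOYED_CONFIG)
        # 0600: readable and writable by the service account only.
        os.chmod(DEPLOYED_CONFIG, stat.S_IRUSR | stat.S_IWUSR)

    if __name__ == "__main__":
        deploy_secure_config()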
Agile Security. Security governance, architecture and testing need to revisit their core functions and re-invent themselves to enable agile development rather than hinder it. This may mean firewall rule requirements gathering as part of design for epics, abuse cases written up as part of user stories, static analysis as part of development frameworks, developer initiated dynamic analysis, dedicated security testing and code review resources for core enterprise applications etc.
Mergers and Acquisitions in product land. As anti-malware becomes less effective, I suspect we will see “the 200 pound gorillas” acquire smaller, more agile security companies with more advanced malware protection technologies. I would not be surprised if we saw mainstream web content management systems enabled with technologies that detect malware command and control communications, or email content systems that dynamically quarantine files with suspected malicious content identified on a site via sandbox based analysis—rather than by MD5 hash designated as malicious by an overworked analyst in a data entry environment after someone submits a malware sample.
As always your comments are welcome below and please consider following me on twitter for more irreverent commentary!

Repost - Citizens not Suspects - Notes on Mandatory Data Retention

Why should a CSO care about the government's mandatory data retention scheme? It’s your customers’ metadata. It’s your company’s metadata.
1. The Australian government is essentially treating all Australian citizens as suspects and proactively issuing a nation-wide data preservation notice. Previously these were only issued on persons of interest. Everyone's getting wiretapped not just the suspected criminals.
2. Metadata can be accessed without a warrant by a subset of government agencies defined as "enforcement agencies" in the act. Enforcement agencies include the ATO, Centrelink, the RSPCA and local councils as well as police and ASIO.
3. Many organisations being asked to retain data (ISPs and Telcos) are not geared up to do this. These requirements will impact personnel, process and technology, not to mention the security requirements. ISPs and Telcos are currently only geared up for "law enforcement intercept" to facilitate the wiretapping of persons of interest.
4. The definition of metadata is vague and is likely to expand to meet the "mission". For example, if Senator Brandis wants to find out who has been watching a certain extremist propaganda video on LiveLeak, he'll need the subscriber details, source IP and destination URL. Therefore, the government will need to retain (at least) all users’ URL history if it wants to achieve this objective. Location data is currently excluded, but this probably means GPS locations, not the physical addresses of service subscribers. If new technologies emerge, like peer-to-peer, the definition is likely to expand or be "legally re-interpreted", perhaps in secret as the US has done in the past.
Do not underestimate the importance and sensitivity of metadata. At a minimum it provides enough information to blackmail someone. The former director of the NSA, Michael Hayden, places metadata in a military context with the quote "We kill people based on metadata". Mapping sets of data about "persons of interest" together is a point-and-click activity these days for the software used by intelligence agencies, such as Palantir. If whistle-blowers, journalists, activists and politicians become persons of interest there is a threat to the free press, the correct operation of our democracy and the independence of our legislators from undue influence by our intelligence agencies.
5. If extremists or paedophiles look on the Internet for twenty minutes they will find useful methods for encrypting, proxying and hiding their communications, which means intelligence agencies still need to compromise the endpoint. Wiretapping will still need to happen for the bad guys.
6. Customers will demand SSL encryption on all corporate web applications to protect their privacy, leading to changes of infrastructure and increased costs.
7. To maintain access to the data due to "the scourge of encryption", the government could mandate that ISPs intercept SSL traffic.
8. The existing website blacklist could be expanded in its use. Expect something like YouTube to be accidentally blocked. It has already happened to 250,000 websites following an ASIC request to take down a scam website.
9. The data retained by these ISP systems, and the access made available to it, will be unprecedented and poorly secured. It will become a target for criminals, private investigators and debt collectors. There will be incidents of misuse by administrators, including resale of the data. Even nation state threat actors will want access to these juicy targets.
10. ISPs and telcos will get hacked and the government will blame them for poor security.
In summary Senator Brandis should tell the AFP and ASIO to go get specific warrants and go after the terrorists with justified targeted surveillance, not a society-wide "fishing expedition".

Repost - iCloud, youcloud, wecloud - thoughts on consumer cloud service security

The recent compromise of iCloud backups of celebrities has piqued interest in the security of consumer cloud services. To paraphrase Del Harvey from Twitter: when you have a million events a day, a one-in-a-million event happens once a day.
Below are a few of my thoughts; securing a commodity cloud service requires a lot of disciplined thinking:
1. If you run a mass market cloud service you need to do some serious threat modelling, including:
  • Consider your users. Not all users are the same. For example, human rights activists and celebrities are at risk of targeted attacks. You will need to categorise your users in enough granularity to apply security controls matching the threats. For example, using birthdates to perform password resets is far less effective for celebrities, whose birthdates are public knowledge.
  • Consider their information assets.
  • Consider threat actors. For example cybercriminals, nation state actors, abusive ex-husbands, garden variety "script kiddies".
2. You should then select appropriate security controls for the threats identified, perhaps even using structured thinking like attack trees or the "cyber kill chain" to pick the most effective (see the attack tree sketch after this list).
3. You need to test the controls. This includes functional, user experience and penetration testing.
4. You need to have the process and people to be able to respond to security incidents including reports of vulnerabilities as well as breaches large and small.
5. Should consumers be able to opt in to increased security controls, the application of which is arbitrated by the cloud services provider? For example, Twitter has a "verified" option for public figures to prevent hoax accounts.
6. Some organisations can and will opt out of commodity cloud services and instead put in bespoke solutions. Instead of Twitter, many companies use Yammer.
7. Large organisations can control the use of cloud services in many ways.
  • A mobile device management solution that uses the iOS API can disable the use of iCloud.
  • A web proxy can be configured to block or monitor the use of commodity cloud services like Gmail and Dropbox.
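For point 2 above, an attack tree can be as simple as nested AND/OR nodes with a rough cost estimate per leaf: the cheapest path shows where a new control will hurt the attacker most. Here is a minimal Python sketch with entirely made-up costs for a hypothetical account takeover scenario.

    # Each node is ("OR" | "AND", [children]) or ("LEAF", estimated_cost).
    # OR: the attacker picks the cheapest child. AND: the attacker must do them all.
    def attack_cost(node):
        kind, value = node
        if kind == "LEAF":
            return value
        child_costs = [attack_cost(child) for child in value]
        return min(child_costs) if kind == "OR" else sum(child_costs)

    # Hypothetical tree: take over a celebrity's cloud account.
    takeover = ("OR", [
        ("LEAF", 9000),        # craft and deliver a convincing spear-phish
        ("AND", [
            ("LEAF", 200),     # scrape date of birth from fan sites
            ("LEAF", 300),     # answer "secret questions" in the password reset
        ]),
        ("LEAF", 50000),       # bribe an insider
    ])

    print(attack_cost(takeover))  # 500: the password-reset path is the weak point

In this toy example the cheapest path is the birthdate-driven password reset, which is exactly the kind of weakness the user categorisation in point 1 should surface.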
Hope these thoughts get you thinking too! 

Repost - Once upon an information security

Once there were mainframes that were standalone systems, fed by punch cards and teletypewriters. They had tight roles, based on access control models, often externalised to the operating system and application.
Everyone wanted access to them so the teletypewriters were extended with serial connections and then modems to allow remote access.
Eventually minicomputers were connected to public networks, the precursors to the internet, and network services such as sendmail were written and exposed.
One of the first network worms, ironically released by the son of a computer security researcher, knocked a good portion of the internet offline.
Now the first bolt-on security product was released - the firewall.
Essentially it was a clever kludge for the problem of too many publicly accessible computers with default installs and insecure network services to manage.
Now personal computers blossomed, along with personal productivity software and a thriving shareware culture.
Some clever idiots started mixing in malicious software with legitimate software and we had the rise of computer viruses - malicious software that required user interaction to replicate.
The second bolt-on security product was born - antivirus.
Antivirus checked files you opened against a database of known malicious software. This again was just a kludge for the problems of poor user awareness, users running with administrative privileges due to undercooked operating systems and a lack of a mechanism for easily identifying if software was trustworthy before executing it.
Attackers started digging into the network services for vulnerabilities as default passwords and debug functions started getting turned off, and found a soft underbelly in web server software. Web site defacements rose and another product arose - Intrusion Detection - essentially a networked version of antivirus looking at packets rather than files.
This helped operations teams get a bit ahead of the game and respond to compromises of internet-facing services in a timely manner.
Big outages, due to network worms on internal networks affecting the dominant server and desktop operating system, drove Microsoft's boss Bill Gates to issue the Trustworthy Computing memo, essentially telling the company that security needed to be a top priority for its success.
Microsoft started turning the supertanker in the right direction by introducing security into their SDLC, assisting law enforcement and pushing security fixes via a security bulletin process.
The company also delivered operating systems without excessive network services running in the default install, with users running under reduced privileges and with cryptographic signing of operating system components, starting to address the root causes of poor operating system security.
But now the threat environment had changed: the threat was no longer computer enthusiasts, "hackers" being a bit too curious or "crackers" being too destructive; it was starting to become organised criminals.
Criminals figured out that serious money was starting to move through computer systems, and that malware called remote access Trojans could help them steal credit card numbers and internet banking credentials, facilitating fraud.
Now finally spies got on the Internet too, as the majority of the world's information got stored in computer systems.
So here we are at a pretty interesting time in information security. The latest operating systems have become easier to manage security-wise and more security is "built in", but most organisations aren't running them yet.
The threat landscape has changed, with the attackers motivated by financial gain.
Vulnerabilities are often no longer publicly disclosed, but instead sold to the highest bidder.
Often it's the application software like Java and Flash which is now being targeted on the desktop.
Information security professionals now have to worry about nation state backed threat actors as well as organised crime backed cyber criminals.
The bolt-on security controls have become less effective, as the threat actors have learned to hide from them, or tunnel through them.
Now we have to chase emerging technologies to stay ahead of the threat actors, as traditional security vendors haven't innovated quickly enough.
Additionally, information technology is again re-inventing itself with "cloud", "mobility", "BYOD", "flexible working", "off-shoring" and a handful of other disruptive ideas.
Just as the better operating systems are arriving, we are swapping out Windows desktops connected to wired networks for laptops connected to wireless networks.
Executives demand email and apps on their iPads and iPhones, introducing new operating systems and new ways of managing them.
Businesses want "instant on" software as a service applications rather than taking the risk to develop or deploy in house solutions.
It's a rapidly changing battlefield in terms of threat landscape and the availability and effectiveness of security controls.
Information security is having to step up a level and think differently about enabling users and third parties to secure themselves, and about securing the data we share with them, as we lose the ability to enforce security controls ourselves at the operating system layer.
Infosec - never a dull moment if you're doing it right.

Repost - Cloud DevOps and Security

There is a buzz in Australia, post Amazon’s Web Services Summit, around a few cloud related concepts. I will try to distil them.
Public cloud: Enterprises here are mostly talking about hosting Internet/extranet-facing corporate applications on Amazon EC2 AMI instances (think little Windows or Linux virtual servers) behind Amazon’s elastic load balancers (IaaS), or on Microsoft’s Azure Platform, which abstracts above Active Directory, web servers and database servers.
Private cloud: Some enterprises are getting interested in spinning up VPNs into Amazon to host intranet applications on shared hardware.
Roll your own private cloud: Some enterprises have engaged service providers to create their own private cloud environments on dedicated hardware on their own premises.
DevOps: This is the concept of not only version-controlling and release-managing a software package installation but also the installation/configuration of the infrastructure that the software package sits on by the use of automation. The catch phrase for DevOps is "Infrastructure as Code".
In practice this usually means scripts of some sort that deploy Amazon AMIs, elastic load balancers, configure them, and build/deploy the software package. There are many benefits, namely consistent infrastructure configuration across environments, better (often also automated) testing and the ability to frequently and rapidly deploy. Some single-application, massive-scale, online companies deploy new software versions multiple times a day.
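To give a minimal flavour of what those scripts look like, here is a sketch assuming the AWS boto3 library, a classic elastic load balancer and entirely hypothetical AMI, security group and load balancer names; a real pipeline would wrap this in templates, tagging and automated testing.

    import boto3  # assumes AWS credentials are already configured in the environment

    ec2 = boto3.client("ec2")
    elb = boto3.client("elb")

    # Launch a web server from a hardened, version-controlled machine image.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",             # hypothetical hardened AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        SecurityGroupIds=["sg-0123456789abcdef0"],   # locked-down security group
    )
    instance_id = response["Instances"][0]["InstanceId"]

    # Register the new instance behind the elastic load balancer.
    elb.register_instances_with_load_balancer(
        LoadBalancerName="example-web-elb",
        Instances=[{"InstanceId": instance_id}],
    )
    print(f"Deployed {instance_id} behind example-web-elb")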
How does security fit into this?
In software deployment there are a few key security controls: segregation of development, test and production environments, and segregated approvals for release from development to test and from test into production, so that one person cannot both write code and promote it to production.
Infrastructure security controls also need to be deployed for DevOps style automated deployment. The configuration scripts should securely configure the systems to benchmarks. Did you know that the Center for Internet Security are planning to have CIS benchmark configured AMIs in the Amazon Marketplace?
All this DevOps and Cloud stuff is starting to help us Infosec people with the basics so we can focus on the tricky appsec aspects. I encourage you all to investigate Amazon AWS and Microsoft Azure and start to enter a new and potentially more secure world!

Repost - Some proposed laws of big data

At the recent CSO Perspectives Roadshow I was on a panel with the esteemed David Lacey, who suggested that, just like Asimov's laws of robotics, we need some clear maxims for the security and privacy management of big data.
Well firstly, let's recap what Big Data is before I attempt to draft these laws. Big Data is essentially the set of techniques for curating and analysing large, complex datasets that are beyond the capability of most normal Database Management Systems and data warehouses. These datasets are often accessed by a wide range of researchers, scientists and (shock horror) marketeers to gather new insights into customers and problems. For example, diverse datasets about the physical environment could be analysed to identify unexpected impacts of climate change. The study of pedestrian and motor vehicle traffic patterns from smartphone navigation data could be used to improve the "livability" of cities. Many applications and websites use big data for "you bought X, you might also like to buy Y" tailored marketing.
So, with that in mind, I offer you Hackling's Laws of Big Data:
1. Collect the data legally.
2. Anonymise and de-identify the data to preserve the privacy of individuals, ethnic/religious groups etc. before it is ingested into the big data dataset (a minimal de-identification sketch follows this list). For example:
a) Year of birth is OK for demographics. Date and month of birth isn't.
b) Postcode is OK for demographics. Street number and address isn't.
c) Anonymised location history is OK; personalised location history is an invasion of privacy.
d) Use of identifiers such as phone number, Social Security Number or Tax File Number should be prohibited to impede data matching and unintended use.
3. Prohibit data matching to re-identify individuals and ethnic/religious groups contractually, using "end user license agreements" and business partner contracts.
4. Log access to investigate misuse of the data.
5. Prosecute misuse of the data.
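A minimal sketch of law 2 in Python, using a hypothetical record layout; a real ingestion pipeline would also need to handle free-text fields, rare-value suppression and the like.

    def de_identify(record: dict) -> dict:
        """Strip or coarsen direct identifiers before ingestion (law 2)."""
        return {
            # Coarse attributes that remain useful for demographics.
            "year_of_birth": record["date_of_birth"][:4],  # "1984-07-21" -> "1984"
            "postcode": record["postcode"],                # kept; street address is not
            # Name, street address and phone number are deliberately dropped,
            # impeding re-identification and data matching (laws 2d and 3).
        }

    record = {
        "name": "Jane Citizen",
        "date_of_birth": "1984-07-21",
        "street_address": "1 Example St",
        "postcode": "3000",
        "phone_number": "+61 400 000 000",
    }
    print(de_identify(record))  # {'year_of_birth': '1984', 'postcode': '3000'}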
I’d welcome your thoughts.

Repost - Open Letter to Canberra: a cyber security policy briefing paper

I just heard that Australia's top cyber security tsar hadn't heard of Tor, the privacy protecting software used by human rights activists and the privacy aware. Well Sachi Wimmer, this blog post is for you!
Here's a few things you should know—because common sense often isn't all that common.
1. The Internet has a lot of people on it. Approximately 39% of the 7.1 billion people in the world are on the Internet. About 27 per cent of those speak English. You have the ability to annoy perhaps 3/4 of a billion people. Don’t annoy the Internet!
There are millions of smart, bored and irritable people on the Internet. They will make fun of you, they may humiliate you or maybe even attack your systems, just for lulz.
Many people in other countries see attempts to militarise or nationalise the Internet by a particular nation as a call to arms. If you think you are smart, there is always someone out there on the Internet smarter than you.
2. There are criminals on the Internet, there are spies on the Internet, there are terrorists on the Internet. There are also businesses, mums and dads, kids, grandmothers, the disabled, and so on. Don't make life hard for the legitimate users of the Internet or remove their privacy.
Just because there are bad guys using the Internet doesn't mean you should treat everyone as a potential terrorist. We don't treat all citizens as potential murderers or put a CCTV camera in every home; we track down and prosecute murderers, with no statute of limitations on prosecution, and get warrants for searches from courts to gather evidence to support prosecution.
I suggest you empower and fund law enforcement to track down the bad guys with zero tolerance for serious crimes and treat people using the internet as innocent until reasonable suspicion is raised.
3. The Internet isn't like the "real world". Some things just aren't possible and don't have real world analogues. For example, encryption technology (pretty much complex math in the form of algorithms implemented in software) can secure communications from interception. The security of communications is necessary for electronic commerce to occur and to maintain the privacy of individuals.
Your citizens can even use encryption to bypass corporate attempts at thwarting net neutrality by geo-blocking content or slowing down less preferred communications protocols. The march of progress is unstoppable. Propping up out-dated business models through legislation holds back economic growth. If you can innovate with Internet technology you can have a global market beating down your door, driving rapid economic growth.
For example, any node on the Internet can connect to any other node. It means that it is possible to flood systems with excessive or erroneous data which can cause a Denial of Service. A common attack is a Distributed Denial of Service Attack, where many "zombie" computers—a botnet—under the control of an attacker send huge amounts of traffic to a specified target.
One problem with Denial of Service attacks is that if the attacks are at the application layer and communications are encrypted at the application layer, an intermediate party cannot easily distinguish attacks from legitimate application traffic.
Giving the government a master key for law enforcement intercept or an internet kill switch weakens the security of the systems, as parties better resourced and motivated than law enforcement will also look to gain access to these mechanisms.
It is important to note that the world is slowly crawling towards the next version of the Internet Protocol, IPv6 (the internet largely still runs on IPv4), which was specified with built-in support for IPsec encryption.
4. Data can move like lightning over the Internet, or it can move like treacle or be as immovable as stone. This is due to the size of data and the ability of technologies to transmit it. If data is too large to transmit, maybe it's time for "sneaker-net", where vast amounts of data can still be moved by foot on lots of tiny memory cards.
Bigger network connections may enable business models and technologies to become viable. For example, downloading your computer/tablet operating system every time your computer boots, creating a fresh, clean, secure new copy while keeping all of your data and applications on remote computers operated by others (the cloud), etc.
5. Data costs big money to store in a corporate context. If you wish parties to retain data, that data needs to be structured, managed, secured, stored and backed up in a manner that allows it to be easily retrieved. This requires people, planning and investment of precious working capital. There are also some really difficult technical and economic challenges that accompany it. It's a lot more complicated than you would think.
6. Beware corporations and their desire to create monopolies. Vendor lock in is a pet hate of IT professionals. Quality drops and costs go up when you are at the mercy of a corporation due to a lack of other viable choices.

Repost - What are worst case, likely and best case security incident scenarios?

When executives are making decisions they like to know the best case and the worst case so that they can measure risks. This is because from risk comes reward! A government agency that takes no risks offers no valuable services. A company that takes zero risks will go out of business. Every service/product/deal always has an inherent risk associated with it.
In the context of information security, a best case is pretty much the same for every scenario. The typical best case you could hope for is that the system or service you are securing will be used for a long time and will eventually be decommissioned without compromise, with any known security incidents reported. If you’ve spent nothing to secure it, perhaps you "bet the farm" and won the lottery—quite a gamble.
Depending on your "risk appetite" if you spent some money on securing the system, at the end of the time period you will see it as either a prudent investment that reduced risk throughout the lifecycle of the system, or an example of opportunity-cost because critical funding was misplaced at the time.
The worst case scenario in the information security context is the true inherent risk of the system or service. This is where a compromise has occurred and all security controls failed, or were not even implemented in the first place. The organisation's protective security controls failed to block an attack, the detective security controls failed to detect the attack and the security incident response was disorganised and ineffective, with the organisation actually notified by a third party, maybe even the media.
The worst case scenario is a breach occurring, with information disclosed or modified and fraud, identity theft or similar resulting. For example, a web server is compromised by a targeted attack. The firewall configuration is not effective and the internal network is penetrated. The customer database is exfiltrated, and it also accidentally contains cardholder data because truncation is not being performed. The incident is not detected, so widespread fraud and identity theft occurs and the organisation’s bank actually notifies them of the theft, requiring an investigation. A fine is likely, as well as a class action lawsuit, media coverage, share price impact and other adverse outcomes.
The more likely security incident scenario is that the organisation’s controls are tested, and prove effective. For example, a web server is compromised, but the firewall configuration is effective and contains the intrusion. The intrusion detection system doesn't detect the compromise but a system administrator does his daily checks and identifies the incident. A security incident is declared and response is timely—containing and recovering from the incident. No adverse media coverage or reputational damage is encountered, and embarrassing calls from banks are avoided.
It is important to always consider the worst case scenario, as this helps us as information security professionals to think like an attacker and design "defence in depth" into our solutions by imagining all that can go wrong. However, our design decisions should always be tempered by thinking about the most likely security incident scenario. This helps you to select cost effective security controls commensurate with risk.
So a few common sense maxims for you to consider in your practice:
1. Hope for the best, plan for the worst. Don't undercook security controls protecting high risk systems.
2. No tin foil hats. Don't overcook security controls protecting low risk systems.
3. Explain to stakeholders the most likely security incident scenario. Avoid sowing Fear Uncertainty and Doubt (FUD) in your interactions.
4. Use the most likely security incident scenario to drive the selection of security controls.
5. Use the most likely security incident scenario to drive the decision to penetration test (otherwise you'll test everything based on a worst case scenario).
6. Use the worst case scenario to select key security controls for enhanced assurance. The worst case helps you identify the key controls that prevent, contain and respond to incidents.

Repost - Executive engagement, securing funding and improving information security

Many information security professionals complain that they are under-resourced and unable to effectively execute on their mission.
The problem is that they are under-funded, usually due to a lack of executive engagement. Here are a few tips to help the propeller-heads amongst us engage with the C-suite a little better.
1. Allocate someone to engage with executives, if you don't have a CISO, pick the most politically astute person amongst the team. It might even just be you at the start.
2. Identify the stakeholders you need to influence to set a budget for information security (CIO, CFO, CEO, COO, CRO etc.)
3. Build an elevator pitch tailored to your organisation. "Hi I'm Matt from Information Security, my job is to help keep MyCo safe by providing secure, cost effective options to the technology group and the business at large".
4. Informally engage with the stakeholders, a phone call, a request for coffee. Introduce yourself and your mission with your elevator pitch. Attempt to understand their objectives, measurements and motivations (e.g. CRO reduce risk, CFO reduce costs).
5. Formally engage with the stakeholders. Perhaps establish an information security committee. Invite the stakeholders; if no-one turns up, send minutes and start buying lunches instead of coffees.
6. Send out a bulletin, with news stories about relevant security issues with an explanation of what your organisation is/isn't doing to mitigate these threats. Tell the good and the bad.
7. Develop a security strategy (essentially a 3 year plan) and communicate it informally, refine it and communicate formally through your new committee. Include proposed organisation chart, old technology to refresh, new technologies to introduce, new processes and policies to develop and implement etc.
8. Set a budget to execute on the strategy, include capital expenditure (hardware, software, licenses, maintenance) and operational expenditure (permanent staff, contract staff, consultancy etc.) Include a "sacrificial cow" that you can discard as a bargaining tactic, and identify the essentials and the "nice to haves".
9. Submit the budget through the formal channels. Negotiate and influence through informal channels.
10. When the budget is approved, execute! Execute! Execute! If you can't get projects completed and outcomes achieved, you won't build trust that you are a good custodian of precious working capital. Hand back funds if you can't spend them in a corporate environment; the opposite applies in a government one.
11. Report on progress to stakeholders on a regular basis with the good and the bad. Celebrate quick wins.
12. Prepare an end-of-year review for your stakeholders and a comparison to the 3-year strategy you launched.

Repost - UPnP unplug and pray? HD Moore the court is in session!

I’ve been thinking about how HD Moore managed the UPnP issue. How do you think the court of public opinion will judge it?
Imagine this:
Judge: Welcome to the court of public opinion for this hearing on whether responsible disclosure practices were undertaken during the release of a white paper on UPnP vulnerabilities. First, we shall hear from the plaintiff.
Plaintiff: HD Moore, you are brought here for the crimes of grep'ing open source code, hyping years-old vulnerabilities in software to shill commercial vulnerability assessment software, and not conducting a coordinated disclosure and fix campaign like Saint Dan Kaminsky did with that DNS bug. A fix was only released for one of the SDKs on 29/1/2013, the day of the blog post announcing the research, and there are still fixes pending for other reported issues. This sets the scene for an incident similar to the SQL Slammer worm, with attackers reverse engineering the source code and developing an exploit. We call for scorn and derision.
Judge: Some fair points, over to the council for the defendant.
Defendant: Your honour, HD Moore has brought much needed attention to a critical vulnerability affecting home users and small businesses not able to afford access to top notch security researchers. He helped arrange a fix with the developer of the vulnerable software development kit before release of the research. His employer has provided free software for identifying the vulnerability and a web page for checking if your internet router is vulnerable, and hence is not ransoming organisations into buying his vulnerability assessment software. He can't be expected to liaise with the hundreds of consumer hardware manufacturers utilising open source software, some of which don't even speak his language. He has sat on these vulnerabilities with one vendor since 2008. In summation, HD's a good Samaritan. You shouldn’t persecute him.
Judge: My judgement is that HD Moore, you have jumped the gun on the release of this research.
1. Check your router for UPnP vulnerability: http://upnp-check.rapid7.com/
2. UPnP vulnerability whitepaper: https://community.rapid7.com/servlet/JiveServlet/download/2150-1-16596/SecurityFlawsUPnP.pdf
3. libupnp project: http://pupnp.sourceforge.net/
4. miniUPnP project: http://miniupnp.free.fr/

Repost - Ministry of Social Development - a study in security architecture and governance failure

In case you haven't heard, a high-profile blogger acting on a tip-off identified that pretty much complete access was available to all the internal file shares on the corporate network of New Zealand's Ministry of Social Development (MSD) via their public access kiosk computers. Other interesting facts have come to light, such as that vulnerabilities were reported in a penetration test and not acted upon.
Firstly let's think about the security architectural failures here.
1. There was no network segregation between the kiosks and the greater MSD network. This may indicate a lack of understanding of the requirement for and use of a security "zone model"
2. Some of the most sensitive data available to MSD was stored on open file shares. This may indicate a lack of understanding of the "principle of least privilege"
3. Virtualisation snapshots of servers were also available on open file shares, allowing an attacker to potentially "takeaway" internal servers to crack into their contents. This may indicate a lack of security considerations around backup data.
Secondly, let's think about the security governance failures.
1. A penetration test report was commissioned from a third party (Dimension Data) but recommendations were not acted upon. This indicates that either the risks identified were not actively tracked in a risk register, or that risk was accepted without treatment due to a misunderstanding of the context of the risks. If we use some "black hat thinking", to borrow from de Bono, it is possible that the report was "locked in the bottom drawer" and conveniently forgotten about by a project manager or similar functionary who would be personally impacted by a budget overrun.
2. No information classification scheme was applied. It is normal for "normal organisational data" to not be labelled, but the most important data in the organisation should be classified. For example locations and details of "at risk" children should be classified and highly restricted.
3. Security controls were not applied commensurate to the security classification of the data. It would make sense to encrypt and password protect such sensitive data as well as storing it on restricted file shares.
If you think this was bad, imagine what a "black hat" could have done with a boot CD or USB of the BackTrack security testing operating system in one of those public access kiosks... obviously some people at MSD can't.

Repost - security advice for small to medium enterprises

I've been engaged by some smaller companies recently and it has given me some insight into the "best bang for buck" information security activities they should be doing. Here’s a list of some of the fundamental security controls they should consider.
1. Download a Business Continuity Plan (BCP) template from the web, back up your systems, and put mitigations in place for the critical systems that support your critical business processes.
2. Perform pre-employment screening and criminal background checks on new employees and subcontractors.
3. Require compliance with your security policy as part of new employment contracts and introduce a disciplinary process for non-compliance.
4. Download a copy of an ISO 27002 compliant security policy template from the web and customise it to your business.
5. Refresh job descriptions to include compliance with relevant aspects of the security policy.
6. Develop an information classification scheme (eg. a table with relevant examples of what is to be classified as what) and label the most important data in the company (eg. salary information, trade secrets, customer lists).
7. Develop an information asset handling procedure (ie. don't put HIGHLY CONFIDENTIAL data on USB sticks and laptops).
8. Conduct security awareness training and require staff to acknowledge an acceptable use policy afterwards.
9. Ensure anti malware software is on all systems and centrally managed so it is always up to date and running.
10. Ensure all users are subject to email and web content management to stop the ingress of malware at the network border.
11. Ensure a firewall is in place, internet facing systems are in a De-Militarized Zone, outbound internet access is restricted to only through the web content management system and the firewall configuration is documented.
12. Ensure administrative and privileged accounts are not shared between staff, and that service accounts for applications are documented and have strong passwords set. Also avoid the sharing of passwords for administrative accounts.
13. Implement identity management processes for user access provisioning, de-provisioning and regular review.
14. Implement a security incident management process.
15. Find out all of the free security features available in your applications, databases and operating systems - and configure them. You will be amazed what is possible with WSUS, Group Policy and Office's Information Rights Management in a typical Microsoft centric network environment.

Repost - Public Relations and information security

Too much public relations and you can over-extend and it can go "pear-shaped". Too little and you can be branded as uncommunicative and unreasonable. For large corporations, the linkage of a brand with any security issue can have a negative effect on share price and immediate financial repercussions.
A large vendor, with great responsibility for providing supporting infrastructure, recently addressed a zero-day vulnerability with no media release until the security patch was issued. This made security operations personnel and security-aware individuals around the globe very nervous.
Here are some guidelines to consider for your organisation:
1. Tell the good stories on a regular basis. Issue a press release when customer-visible or customer-impacting security features have been implemented. For example, if you introduce optional two-factor authentication for customers, promote this competitive advantage. If you are a social media site, a press release announcing the successful implementation of password hashing on your database, further to recent compromises at competitors, would be a good story to tell. You will note some financial institutions even tout their fraud monitoring capabilities in television advertisements; these capabilities reduce losses for them and potential inconvenience for you. The story may be: we're making security easier for you because we're monitoring your transactions for fraud, rather than giving you extra security controls to deal with.
2. Have a social media policy and a press relations policy, and educate your employees not to speak to the press, and when they do speak, to "stay on approved message". I've experienced jaw-dropping occasions where CIOs of major corporations share "bright ideas" (un-vetted by the corporation) that could be taken right out of context by the media. If you are a senior person in information security, ask to be consulted on any press releases, advertisements or planned presentations involving security, information exchange or third party relationships.
3. If you are communicating bad news, do it in a timely manner. Have a "canned", pre-approved, factually correct press release that mentions that the organisation has been made aware of a security incident and is working on responding via previously established security incident management processes and procedures, and that updates will be provided to protect affected stakeholders when actionable information is available. When information is available on the extent of a security incident, be as transparent as possible with affected stakeholders without outlining new/existing security controls and control gaps that could negatively impact your security posture.
4. Think of the implications before you act. If there is a major decision coming up related to security, think about the public relations upside and downside as well as the legal exposure if the decision became publicly known. Before you call the cops on a security researcher or issue a cease and desist letter, think of the available options. When all else fails, quote the Google motto and "don't be evil".