barefoot cybersecurity

security… mobility… cloud… technology… whatever…


Estate Agents and GDPR

Thousands of UK estate agencies are yet to take their first steps towards compliance with the new General Data Protection Regulation (GDPR), which comes into effect on 25th May 2018.

Crippling fines threaten those who continue to market properties to their existing client base without taking additional steps to protect, and to seek consent for the use of, those clients’ personal information.

Estate agents, along with many other industries, have been inundated with information and training courses offering advice on how they should change their working processes in line with the new regulations, but precious few have been able to enact sufficient change in their businesses to comply.

Fortunately, help is at hand. Barefoot Cybersecurity has developed a service aimed specifically at estate agencies that will assess, monitor and manage GDPR compliance on their behalf. One of their first UK customers was Essex-based agency Court & Co. Their Managing Director, Nicholas Court, takes up the story…


“When we first sat down as a company and tried to understand the full implications of GDPR, we really couldn’t grasp just how large the task was: what actions we would be required to take and how it would affect our business.”

“We realised this is not a ‘wait and see what everyone else does first’ situation. We had to take action and put a plan in place. We generate a large amount of our business through regular contact with our database: a database we have spent over a decade compiling, totalling about six thousand contacts. It forms the backbone of our business. The thought of not being able to continue to contact these individuals after 25th May filled us with dread.”

“We enlisted the help of Barefoot Cybersecurity, who carried out a full assessment of our business practices, helped us to fine-tune our customer journey and trained all of our employees quickly and efficiently.”

Barefoot’s service works by taking advantage of the regulation’s allowance for assigning the duties of a Data Protection Officer to a third party. Further, by focusing on niche market verticals like real estate, they are able to deliver a streamlined and cost-efficient service, as many agencies are battling with similar issues. Any unique processes are established during the initial assessment phase, after which they supply any documentation and training required, as well as a set of targeted, clear directives regarding outstanding issues preventing compliance.


Nicholas continues:

“By using their online compliance dashboard we can see at a glance that all our team are 100% trained in each of the required fields. We also feel safe in the knowledge that our computer systems are being constantly monitored by Barefoot’s Security Operations Centre for malware, spyware and any unwanted guests trying to hack our data.”

“After the initial set-up, which was carried out over a couple of scheduled 30-minute calls, we were up and running and able to get back to what we do best. Sell property!!”

For details of this and other Cyber Security services, contact




Mobile Device Management 101

Like it or not, smartphones and tablets are all over our enterprise networks. Users now expect to be able to carry around their mobile devices and have instant access to their corporate email account and documents. As business leaders we mostly embrace this change and welcome mobility into the work environment. We want to empower our users to work from anywhere and increase productivity levels using technology they enjoy carrying. However, those of us with at least one foot in the security department understand the greater risks of letting users loose to roam the globe carrying the company’s most sensitive information on the world’s most steal-able devices.


There was a time when only remote users and exec-level employees would be issued with a company-owned device. But when a technically savvy general population was able to buy their own smartphones, and Apple devices started to make their way into the boardroom, we saw the introduction of BYOD: Bring Your Own Device. The move towards mobile working for all is here.


BlackBerry brought mobile email to the mass business market, so we installed enterprise servers to enforce some security and improve the user experience (who needs to know how to configure an IMAP server on a phone anyhow?). The landscape has changed, and most businesses trying to manage devices from Apple, Samsung, BlackBerry and Nokia, to name a few, will struggle without some kind of enterprise software solution to roll out configurations and enforce security.

This is the primary function of a Mobile Device Management (MDM) solution: managing which devices belong to each user and applying configuration policies based on various criteria, for instance whether the device is owned by the company or the employee, what its operating system is, and the confidentiality level of the information the user is accessing. These policies are enforced with rules that specify the consequences of being out of policy, including blocking further email to the device or sending SMS and email notifications to the user and IT administrators informing them of policy violations.
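At its core, a policy engine of this kind evaluates each device against a rule set and maps any violations to a consequence. The sketch below is a toy illustration only; the field names, thresholds and actions are invented for the example and are not any vendor’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Device:
    owner: str          # "corporate" or "byod" (hypothetical values)
    os_version: tuple   # e.g. (16, 4)
    pin_enabled: bool
    encrypted: bool

def violations(device: Device, min_os=(14, 0)):
    """Evaluate a device against an example policy: everyone needs a PIN;
    corporate devices additionally need encryption and a minimum OS."""
    problems = []
    if not device.pin_enabled:
        problems.append("no PIN set")
    if device.owner == "corporate":
        if not device.encrypted:
            problems.append("encryption disabled")
        if device.os_version < min_os:
            problems.append("OS version below minimum")
    return problems

def consequence(problems):
    """Map policy violations to an action, e.g. blocking further email."""
    return "block_email" if problems else "allow"
```

A real MDM server would layer notifications, grace periods and remediation workflows on top, but the evaluate-then-enforce loop is the same shape.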

Furthermore, leading MDM solutions understand the source of the data residing on each device and therefore which data belongs to the user personally and which belongs to the business. This is extremely useful when an employee leaves the business: not only is their access to corporate resources revoked, but the company’s data residing on their device is wiped whilst preserving the user’s personal email, docs, photos and music. This is known as a selective wipe of the device, which may be done in preference to an entire wipe of all data if, for instance, a device were permanently lost.

Basic security

According to, $2.7 billion worth of smartphones were lost in 2011. It’s fair to say that the majority of those phones did not have even the most basic of security measures applied: a simple PIN code to protect privacy. Businesses today invest money and effort enforcing data leakage controls on their networks to protect company information, so it stands to reason that if a user is to read their corporate email on their phone, a security policy should be applied. Of course, you could just ask users to please enable a PIN and encryption, but it would be impossible to police against users turning them off because of the inconvenience. MDM enforces these policies and has the capability to wipe all the corporate data if users do not comply.

WiFi/Email/VPN Settings

Imagine the time and effort involved in supporting every new user, on multiple different mobile operating systems, to configure their Exchange email account, connect to the WiFi and configure the VPN. With MDM, these configuration settings can be distributed over-the-air to each new user that is registered on the MDM server.

Application Control

Another key benefit of MDM is the integrated ability to manage mobile apps for business users. Businesses typically want the ability to securely manage which mobile applications are installed on their devices. Administrators select which applications are required, allowed or disallowed. The company’s preferred CRM and VoIP apps may be recommended to all users, whilst it is difficult to think of a good reason why Angry Birds should be installed on a company-owned iPad!
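The required/allowed/disallowed logic described above reduces to a couple of set operations. A minimal sketch, with purely illustrative bundle identifiers:

```python
# Hypothetical app-control lists an administrator might configure.
REQUIRED = {"com.example.crm", "com.example.voip"}   # must be installed
DISALLOWED = {"com.rovio.angrybirds"}                # must not be installed

def app_compliance(installed):
    """Compare a device's installed-app inventory against policy and
    report what is missing and what should not be there."""
    installed = set(installed)
    return {
        "missing_required": sorted(REQUIRED - installed),
        "forbidden_present": sorted(DISALLOWED & installed),
    }
```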

Document Control

Then there is the Dropbox problem. Unless users are provided with legitimate methods to carry around the documents they require, they will inevitably find an alternative way. Dropbox and Gmail give users exactly that opportunity, which is why leading MDM solutions provide access to, storage of and viewing of documents from email and SharePoint, and enable the business to protect these documents from unauthorised distribution.

…and the rest

Of course, MDM can deliver much more. Depending on the solution used, you may disable certain features like the camera or Bluetooth on company-owned devices, ensure that devices only connect to WiFi networks with adequate security, and locate devices on a map if lost. Often there are cost-control features to warn of or prevent excessive usage and roaming charges. The detection of jailbroken or rooted devices, which are more susceptible to malware, is another important feature.

In all of this, it is key to work with a vendor that can support all your operating systems and preserve the user experience. One thing is for sure: today’s users are as smart as their devices, and if their user experience is spoilt by a poor implementation of MDM, they will look for a way to bypass it.


How Secure is Your Active Directory?

An increasing number of enterprises, in line with leading information security advisory bodies around the world, are starting to wake up to the fact that to develop and maintain a continuous view of their security posture, a new approach is needed: regular monitoring. Not just of their devices, applications and data, but of the one most constantly evolving and unpredictable element of the network: its users.

Today, Microsoft’s Active Directory (AD) is the most widely deployed and relied-upon tool for user authentication and authorisation, and therefore represents the keys to the kingdom for all of an enterprise’s network resources.

The Problem

AD is dynamic in nature. It changes constantly over time, and many organisations have been reliant on AD to provide access to sensitive resources for many years. Without regular management of your AD environment, small errors introduced, sometimes a long time ago, can develop and manifest themselves as serious issues relating to security, compliance and operations.

Individual AD user accounts may provide access (either directly or via a group membership) to a wide array of devices and information. Accounts often become “orphaned”, either because employees leave the business or because they were created for temporary staff, contractors or test purposes. Operating procedures should stipulate how these accounts are retired, but they are frequently overlooked.

As individual accounts embody the basic entity around which security policy is structured, it is common for auditors and custodians of these resources to ask which accounts exist, and why and when they were created. It is, all too often, an impossible question to answer.

The same is true of user groups. Users obtain group memberships to fulfil a specific function in the environment. However, people’s functions within the enterprise evolve, perhaps moving to another department, meaning a user no longer needs access to certain resources. Group membership should therefore be revoked, but experience suggests otherwise. When you add in that group memberships can be inherited, the problem is exacerbated.

Furthermore, consider what happens following a major restructuring of the AD environment, such as moving an Organisational Unit (OU) under a new parent OU. This could result in the new parent OU gaining administrative rights over objects within the new child OU. Objects within AD probably have security policies applied via Group Policy Objects (GPOs) attached directly to these OUs.

The Solution

Recommended best practice from leading industry experts is to perform periodic security audits to ensure that AD is being properly managed and protected. There are some key areas on which such audits should focus.


When new accounts are created, there are several attributes that are useful to know besides the account ID and the Distinguished Name (DN), the path by which an account may be located within the directory tree. One of the most important pieces of information is the identity of the person who (or at least the administrative user which) created the new account. Should that user have authority to create new accounts? Was there a valid reason for the account’s creation? These are typical resulting questions.

A clear picture of the time and date accounts were created as well as the server used are important pieces of forensic information in the verification of appropriate use.

The volume of new accounts over time should also be carefully monitored. In particular, failed attempts to create new accounts could indicate a potential risk to the business.

Finally, accounts which have not been accessed for a specific length of time may be redundant or orphaned. These orphaned accounts represent a significant security risk and are frequently exploited by attackers. At the very least, an orphaned account represents valid login credentials of a former employee who still has potential access to networked resources.
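One practical way to hunt for such accounts is an LDAP query against the `lastLogonTimestamp` attribute, which Windows stores as a FILETIME (100-nanosecond intervals since 1st January 1601). The sketch below only builds the filter string; running it would need an LDAP client, and note that `lastLogonTimestamp` replication can lag by up to 14 days, so treat results as candidates for review rather than proof:

```python
from datetime import datetime, timedelta, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def to_filetime(dt):
    """Convert a datetime to a Windows FILETIME value
    (100-nanosecond intervals since 1601-01-01 UTC)."""
    return int((dt - EPOCH_1601).total_seconds() * 10_000_000)

def stale_account_filter(days, now=None):
    """Build an LDAP filter matching enabled user accounts whose
    lastLogonTimestamp is older than `days`. The bitwise matching-rule
    OID excludes disabled accounts (userAccountControl bit 2)."""
    now = now or datetime.now(timezone.utc)
    cutoff = to_filetime(now - timedelta(days=days))
    return (
        "(&(objectCategory=person)(objectClass=user)"
        "(!(userAccountControl:1.2.840.113556.1.4.803:=2))"
        f"(lastLogonTimestamp<={cutoff}))"
    )
```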

Group Membership

An individual’s effective rights and permissions within an organisation depend on the groups to which they belong. Therefore we must gain an appreciation of their group memberships and inheritance.

As part of an AD audit, groups with high levels of access, such as the Domain Admins group, should be reviewed for their memberships. Additionally, any further periodic changes made to group memberships should be verified via an audit.
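An audit of this kind can be as simple as diffing periodic snapshots of a group’s member list and flagging any change to a high-privilege group for review. A minimal sketch:

```python
# Groups whose membership changes always warrant review.
HIGH_PRIVILEGE = {"Domain Admins", "Enterprise Admins", "Schema Admins"}

def membership_changes(previous, current):
    """Compare two snapshots of a group's member list (e.g. taken on
    successive audits) and report who was added and who was removed."""
    prev, curr = set(previous), set(current)
    return {"added": sorted(curr - prev), "removed": sorted(prev - curr)}

def needs_review(group, changes):
    """Any change to a high-privilege group should be verified."""
    changed = bool(changes["added"] or changes["removed"])
    return changed and group in HIGH_PRIVILEGE
```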

Organisational Units

To ensure that the AD environment is being well managed and that security policies are being applied correctly, it is highly recommended that OUs are audited regularly. This should cover any changes made to an OU and report specifically on any GPOs applied or removed.


The Basics for Hard Disk Encryption

So you realise you (or your staff) are walking about with all that confidential company data on a laptop drive. OK, so it’s password protected, but that will not prevent an attacker from booting the system from a thumb drive or removing the hard disk to access it from another system. Without some kind of hard disk encryption, that data is easily readable by anyone with access to the drive.

There are three basic solutions to protect the confidentiality of a hard disk’s contents. Here’s a brief overview of these options.

1. Software based file/folder/partition encryption

One simple approach is to specify a directory or partition on your disk in which to write any sensitive data. Using software-based encryption, any data written to this location will be encrypted “on the fly”. Permission to access this data (through authentication) is typically granted using a password (maybe your Windows login).

One benefit of this approach is that your operating system can remain in an unencrypted disk location, which maintains performance and avoids complications associated with OS maintenance and patching. However, you are reliant on the user writing their data to this secure location. Often a disk can be partitioned and the user instructed to store all their classified information there, but habit dictates that email attachments still end up on the desktop, vulnerable. Open-source tools like TrueCrypt are frequently employed to provide this kind of functionality for free, although without commercial support.

Safend provides a unique solution for Windows that automatically encrypts user-generated content whilst avoiding system and program files, which goes some way to solving this.

2. Software based full disk encryption

With full disk encryption, pretty much everything on the drive is encrypted, meaning that the choice is no longer the user’s to make. However, the performance and maintenance issues mentioned above now come into consideration. Also, the boot loader remains unencrypted (note I said pretty much everything) to allow the key to be available for the password authentication process, and this remains vulnerable to attack.

To protect these keys further, one can use the Trusted Platform Module (TPM), which is standard on today’s PCs. The TPM is a secure hardware device (or chip) on the motherboard which enforces that the drive can only be read by that specific system. Generally speaking, hardware-based security offers more protection than software-based.

Microsoft now includes its BitLocker encryption in the Enterprise and Ultimate editions of Windows 7 and in the Pro and Enterprise editions of Windows 8, and it can be used in conjunction with a TPM.

Other leading enterprise solutions are available from the likes of McAfee, Sophos and Check Point.

3. Hardware based full disk encryption

Hard disk security has evolved further, pushing security deeper into the hardware layer with the new generation of self-encrypting drives (SEDs).

With SEDs, the encryption key is embedded in the drive controller itself. Authentication takes place as the drive is powered on, with a BIOS password which can also become a single sign-on to the domain. Therefore, from the moment the drive starts, all data is encrypted on the way in and decrypted on the way out. No performance issue and no software-based authentication.

It is not possible to access the stored key without physical destruction of the drive and the data cannot be read without the encryption key.

Wave Systems and Seagate pioneered SEDs supporting the OPAL standard developed by the Trusted Computing Group (TCG), and today Hitachi, Samsung and Toshiba all produce OPAL-compliant drives.

At the time of writing, SEDs can be purchased for as little as $20 more than the equivalent specification non-encrypting drive.


Scrambls gives you control over your personal content online

Increasingly, our identities are viewed through what we say and do on the web.

Facebook, Google+, Twitter, YouTube, Yahoo, LinkedIn, Salesforce, Gmail, Skype, AIM, WordPress, Foursquare, Bebo…  (I haven’t even researched yet!)

Whilst these social networking tools are all very nice and, well… sociable, we hear endless warnings about protecting our privacy online. Whilst each of these sites and technologies has a set of privacy controls, if you are a frequent user of several networks, the job of keeping these all in check may not be as high a priority, or as simple, as it should be.

The problem with the web is that once your info is out there, you’ll never be able to say with any degree of certainty who is reading it, or retract it at a later date. Do you trust everyone with your data? If you delete your account, is everything you ever posted going to be deleted, and how will you ever know or check?

Scrambls looks like a cool solution for wrestling back some control over your personal content online. It uses a browser plug-in or mobile app to scramble your posts (using, it would seem, some proprietary encryption) so that they may only be read using a key stored on the Scrambls server. The author owns these keys and can set the policy for whom and when to make them available. The thing that makes this smart for me is that when you decide it’s time for your information to disappear, you delete the keys, and the info remains scrambled forever, even if your account is never deleted.
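The underlying idea is sometimes called crypto-shredding: encrypt the content, keep the key under your own control, and destroying the key renders the ciphertext permanently unreadable wherever copies of it survive. The toy sketch below illustrates the concept only; it is a deliberately simple home-made cipher, absolutely not a substitute for real, reviewed cryptography, and Scrambls’ actual scheme is proprietary:

```python
import hashlib
from itertools import count

def _keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. Illustration only --
    never use home-made ciphers for real data."""
    out = b""
    for i in count():
        if len(out) >= n:
            break
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return out[:n]

def scramble(key: bytes, text: str) -> bytes:
    """XOR the post with the keystream; only the key holder can reverse it."""
    data = text.encode()
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def unscramble(key: bytes, blob: bytes) -> str:
    return bytes(a ^ b for a, b in zip(blob, _keystream(key, len(blob)))).decode()
```

Post the scrambled blob publicly, keep the key with a key service, and deleting that key leaves nothing behind but noise, even if the post itself is never removed.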

How Scrambls works

Whilst this may be overkill for many users’ public discussions about the latest tabloid headlines, I can definitely see uses for more sensitive information: protecting our children’s information, business data and material with personal copyrights.

For more information, go to


Free SSL Server Test

Many vendors provide free (freeware, freemium) tools to attract you into trying their product and getting hooked. Trying out these often unsupported technologies is a disappointing experience when they either:

  • don’t deliver all they promise
  • ask you to upgrade to the premium product before you get anything useful
  • bombard you with ads (or worse)
  • all of the above!!


However, occasionally it’s possible to stumble across a gem. Here’s one I’ve found useful recently.

Qualys SSL Labs

Ever wonder just how secure that new online shop is that you are just itching to pump your credit card details into? Well, you should.

This free service performs a deep analysis of the configuration of any SSL web server on the public internet. Simply type in the address of the secure (https) website you want to test. The service will first look at the certificate to verify that it is valid and trusted. It then inspects the server configuration for protocol, key exchange and cipher support.

Finally, a percentage score is awarded to each individual category, and a total score and grade are displayed.
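SSL Labs also exposes the same assessment through a public REST API (v3 at the time of writing; check the current documentation before relying on the exact endpoint or response fields). A small sketch:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://api.ssllabs.com/api/v3/analyze"

def analyze_url(host: str, from_cache: bool = True) -> str:
    """Build the request URL for the SSL Labs assessment API."""
    params = {"host": host, "fromCache": "on" if from_cache else "off"}
    return f"{API}?{urlencode(params)}"

def grades(host: str):
    """Fetch the (cached) assessment and return per-endpoint grades.
    Network call: a fresh assessment can take several minutes."""
    with urlopen(analyze_url(host)) as resp:
        report = json.load(resp)
    return {e.get("ipAddress"): e.get("grade") for e in report.get("endpoints", [])}
```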

I tested the usual popular services (Gmail, Facebook, Skype, etc.) as well as several online banks and, as you’d expect, they all achieved a grade A (80% or above). Then I tried the South African Revenue Service (I need to do my tax return!). Only 61%.

What other government sites can I test? FBI? Only 57%!

Both were let down in the key exchange phase, making them vulnerable to denial-of-service and/or man-in-the-middle attacks.

Next time my finger is hovering over the “check-out” button, I’ll be sure to test here first.


Sensitive Data: Coping with that file server!

Almost every enterprise has some kind of legacy file server. You know, the one that stores all the information that has to be accessed by the various business groups. Great swathes of spreadsheets, presentations, photos, accounts, client info and (possibly!) some illegally shared media.

This unstructured data grows rapidly and organically. Many larger enterprises have sensitive data, which may contain confidential and/or personal information, residing on file stores where there is an insufficient understanding of exactly what this data is and who is accessing it. The over-permissive nature of global directory groups, such as the “everyone” group, means that there is little control over exactly where in the enterprise this sensitive data is written.

User entitlements to view certain groups and folders evolve over time. Entitlements are rarely reduced, yet regular reviews of user entitlement by manual methods are time consuming and therefore generally ignored. The result is familiar:

  • No intelligence with regards to where sensitive data resides, who owns it and who may access it.
  • No intelligence over data that no one is accessing. Typically 50% of data becomes ‘stale’ after 90 days.
  • Over permissive access. Statistics indicate more than 95% of file access activity is not audited by IT.

I usually recommend a phased approach to tackling each of the issues identified. The first phase is to identify sensitive data, data owners and data that can be archived or deleted. The second phase is the more strategic process of applying policies to the classified data and controlling and auditing access to it.

Phase 1: Identification

The initial task of locating this sensitive data may appear overwhelming given the size of a typical enterprise file server and the realisation that sensitive data could reside literally anywhere within it. Key to making this task more manageable is to reduce the overall amount of data in which there MIGHT be sensitive information, by identifying the data which can be clearly classified as NOT sensitive. This can be extended further by identifying data which may be archived off potentially expensive enterprise storage, and other data which no one is accessing at all. By categorising information in this way, we gradually narrow down the portion of the data set which may possibly contain our sensitive information.

Categorisation by File Type

An important early step is to gain a high level overview of the file types that constitute the unstructured data set. This provides two important benefits:

  1. Quickly identifies files that would not contain sensitive information. Typically you could include PowerPoint and audio/video files in this category.
  2. Locates data which the business has no requirement to be centrally stored. Personal MP3s might constitute this category.

A data governance solution typically does this in a couple of ways: firstly, by reporting on the file types being accessed and the number of events on each; and secondly, by locating files based on their file extension.
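A first pass of this kind is easy to prototype. The sketch below walks a file tree and totals bytes per extension, giving the high-level overview described above:

```python
from collections import Counter
from pathlib import Path

def files_by_type(root):
    """Walk a file tree and total the bytes held by each extension:
    a first high-level view of what the unstructured data set contains."""
    sizes = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file():
            sizes[path.suffix.lower() or "(none)"] += path.stat().st_size
    return sizes
```

A commercial data governance product would add access events and locations, but even this crude tally quickly surfaces the MP3 hoards.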

Classification of known sensitive data

There are some basic criteria which can be used for data classification:

  • Time criteria are the simplest and most commonly used: data is evaluated by time of creation, time of access, time of update, etc.
  • Metadata criteria, such as type, name, owner and location, can be used to create more advanced classification policies.
  • Content criteria, which involve the use of advanced content classification algorithms, are the most advanced form of unstructured data classification.

Use of an automated classification framework (like Varonis DCF) provides visibility into the content of data across file systems and can be used to locate data which is easily classified as sensitive. For example, you could conclude that any file that contains a personal ID number holds personal information and should be protected in line with appropriate guidelines.

Equally, any documents previously classified by their file properties or by keywords (e.g. “Company Confidential”) can be quickly located. These are examples of “quick wins” in the reduction of data with unknown classification. Other examples of easily classified data include files containing:

  • Policy numbers
  • Phone numbers
  • Postal Codes
  • Bank account numbers
  • Credit card numbers
  • Passport numbers
  • Keywords
  • Personal (out of domain) email addresses
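Checks like those above are essentially pattern matches. The sketch below is illustrative only; the patterns are simplified and a real classifier needs validated, locale-aware rules (for example a Luhn check on candidate card numbers to cut false positives):

```python
import re

# Illustrative patterns only -- production classifiers need validated,
# locale-aware rules and secondary checks.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "keyword": re.compile(r"Company Confidential", re.IGNORECASE),
}

def classify(text):
    """Return the names of every sensitive-data pattern found in `text`."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))
```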

Identification of inactive directories

Enterprise file stores typically contain vast amounts of data that is no longer in use and is therefore stale. It’s very difficult to determine where that data resides, so it remains on expensive file systems, possibly exposed to risk due to excessive permissions. This task can be greatly reduced with an automated solution to identify and report on inactive directories, which may then be archived pending deletion. In many cases the amount of server space reclaimed during this stage will, in itself, pay for the capital outlay of such a solution.
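A minimal sketch of such a report, using last-modified time as a proxy for activity (true last-access times are often disabled or unreliable on enterprise filers, so a real solution works from its own audit trail):

```python
import os
import time

def inactive_dirs(root, days=90):
    """Return directories in which no file has been modified for `days`:
    candidates for archiving pending deletion."""
    cutoff = time.time() - days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if not filenames:
            continue  # ignore directories with no files of their own
        newest = max(os.path.getmtime(os.path.join(dirpath, f)) for f in filenames)
        if newest < cutoff:
            stale.append(dirpath)
    return stale
```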

Identify data owner by folder

Having been through the processes above, the subset of data that remains unclassified is substantially reduced. The next step is to classify the remaining data by involving the data owners. This would be done at strategic levels within the folder hierarchy. Organisational data owners could be your biggest asset in the battle to identify which data is sensitive and which is not.

Data owners should be making decisions, taking responsibility and correctly classifying their data. Without a data owner who understands the sensitivity, importance and organisational context, data cannot be managed and protected by the right people.

By analysing permissions and directory services, it is possible to identify the folders closest to the top of the hierarchy where permissions for business users have been explicitly applied. These folders should have assigned data owners. An audit trail of every open, create, move, modify and delete on the file system should be kept. By analysing this data over time, it is possible to provide actionable business intelligence on the probable data owner of any folder.
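Given such an audit trail, inferring the probable owner can be as simple as counting events per user per folder. A minimal sketch over hypothetical `(user, folder, action)` event tuples:

```python
from collections import Counter

def probable_owner(events):
    """Given audit events as (user, folder, action) tuples, return the
    most active user per folder: a strong hint at the data owner."""
    activity = {}
    for user, folder, _action in events:
        activity.setdefault(folder, Counter())[user] += 1
    return {folder: counts.most_common(1)[0][0]
            for folder, counts in activity.items()}
```

Commercial tools weight event types and recency rather than raw counts, but the principle is the same: whoever touches the data most is the first person to ask.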

Identify data stored in other locations

Managing the enterprise file servers is not the final solution for protecting unstructured data. There are other locations onto which data can be stored which will need to be identified and managed. Mail servers and content management solutions (like SharePoint) can be managed using similar technology.

However, arguably a more difficult problem to solve is that of sensitive documents residing on the local disks of your network endpoints. Laptop users are particularly liable to drag documents onto the desktop so they can access them offline. The business likely has little or no insight into this unsecured data and is exposed to risk from faulty business processes. With the data classification rules now largely understood, they can be used to locate this locally stored information.

DLP and similar endpoint protection solutions will map and locate sensitive data stored on workstations and laptops. This can run in the background with minimal impact on productivity, saving valuable time and improving the efficiency of the data discovery process. Typically, such tools allow logging and reporting on the use of this locally stored data and as such should be implemented as part of the data protection programme.

Phase 2: Control of data by sensitivity level

With the majority of the data set classified, it is time to work again with the data owners to assign a sensitivity level to the data. The level of sensitivity should differentiate between valuable information that carries a high level of risk and other information that may be sensitive but carries less risk if exposed or lost. Common practice stipulates the following levels:

  • Confidential – Requires significant protection as disclosure may seriously harm the business
  • Private – Associated with an individual, for whom disclosure might not be in their best interests
  • Sensitive – Requires protection due to regulatory conditions
  • Public – Information that is already public knowledge

Policy based on Sensitivity level

Policies should be designed to provide details on how to protect information at varying sensitivity levels. Consideration should be given to the following issues:

  • Access control requirements
  • Marking/meta-tagging of files
  • Electronic distribution/transmission
  • Storage requirements
  • Retirement and disposal of outdated information

Organise & Restructure

With sensitivity levels in mind, the file stores may be organised and restructured. Data owners should be assigned at strategic levels in the folder hierarchy. These data owners will be the custodians of the information going forward and should be ultimately accountable for ensuring that data residing under their jurisdiction is managed in line with current policy.

Data protection

A continued program of data protection should be adhered to. Central to this is the periodic re-scanning, auditing and reporting of information residing in the unstructured data environments. Further consideration could be given to data residing on users’ local hard disks. An endpoint protection solution can be implemented to periodically discover locally stored, sensitive information based on the data classifications that are now defined.


MDM: MobileIron Vs Good Technology

Most of us working in Information Security understand today’s requirement to secure our users’ mobile devices. Why would any enterprise allow its most valuable asset (its confidential corporate data) to reside on the most stealable devices (tablets, smartphones) without enabling at least basic security like PIN enforcement, encryption and the ability to locate and wipe if necessary? Sounds obvious, right?

I’ve been working with MobileIron for Mobile Device Management (MDM) solutions over the last year or so to implement (mostly) enterprise “bring your own device” (BYOD) strategies.

I was recently given a demonstration of the Good Technology MDM product at the ITWeb Security Summit. Importantly, there is a major difference in the way it works in comparison to MobileIron. Good Technology requires that usage of protected information (mail, docs, etc.) is done from within their mobile app. This means that to read your corporate email, rather than going to the native email application on your device, you have to launch the Good app and work within it. From what I see, their claim is that this provides a sandboxed container to secure all the protected information and enable easier management for wiping, etc.

MobileIron has a mobile app which manages the connection to the server. When a device registers with the server, a profile is installed which manages security. This means that users use their native OS apps. People tend to buy an iPhone because they like using iOS and Apple devices. The same is usually true of users of other devices. My view is that if people are to be encouraged to “bring your own device”, it’s important that they are empowered to maintain that native user experience. Otherwise they start looking for ways to avoid using the app, by putting docs in Dropbox or forwarding to Gmail.

If you’re looking for some validation of this, just look at the reviews on the Apple AppStore:

Strange also that Good Technology does not support RIM BlackBerry. I know it’s dying, but surely the user base is large enough?

The 2012 Gartner Magic Quadrant for Mobile Device Management Software can be viewed here.