The common perception of Internet security is that if a website uses the SSL protocol to provide a secure channel for transmitting private details such as passwords and credit card numbers, then the website itself is inherently secure.
In reality, however, SSL only provides assurance for the integrity and security of information in transit between a client and server.
Security is, in general terms, the pro-active protection of assets against threats.
An asset may be a physical asset – as in the contents of your home; a logical asset – such as information key to your business; or some other, less tangible asset – such as the goodwill of a business as a going concern, or simply the ability to conduct business.
A threat is any event that has the potential to remove, damage, destroy or otherwise compromise a physical, logical or less tangible asset.
Potential threats include, for example, human error, fraud, theft, fire, flooding, lightning, software faults, terrorism and malicious attack.
Security applied to information technology
In the context of business, computer systems have evolved – and continue to evolve – from simple productivity tools, such as word processors and spreadsheets, to systems that encapsulate entire business processes through customer relationship, workflow and document management and, in Internet-based applications, even represent the business itself.
In many business areas – finance, travel, logistics and even some forms of retailing – the correct operation and continuity of computer systems is fundamental to the success, even survival, of the underlying business.
It is for this reason that the question of security in information technology is becoming increasingly important: as systems are interconnected within corporate local and wide area networks and, more recently, through the Internet, the risks to computing assets – information, processes and the integrity of the computer systems themselves – have multiplied.
A brief history of IT security
When computers were first invented, they were used for very specific roles; for example, to calculate ballistic firing tables for the military and code breaking.
At the time, security was essentially physical: computers were located in physically secure environments and relatively few people had access to them.
The computers themselves had only a small amount of volatile memory and no backing storage, so the program and any data inputs it required were provided using early input devices such as plug boards, punched paper tape or punched cards. Any output generated by the program was usually written to Teletype machines, punched paper tape or cards for subsequent processing.
On early machines, only one program was running at any given time and the program, data inputs and subsequent output were all “physical” objects where security could be assured physically: paper tapes and cards can be locked away and records that were no longer needed, destroyed easily.
The need for logical security mechanisms – which prevent access to, or corruption of, information and programs by other programs run before, after or concurrently on a given machine – surfaced as utilisation of computing facilities increased to recover their huge installation and operating costs, which often ran into millions of pounds.
Before multi-programmed operating systems, early operating systems – called resident monitors – were used to sequence jobs and, after each job, were responsible for preparing the machine for use by the next job.
Multi-programmed operating systems, first introduced by IBM in the late 1960s, optimised the use of a computer’s CPU resource by allowing different programs to execute while others were blocked waiting for I/O operations to complete.
With a multi-programmed computer system, contention for the computer’s CPU, memory and I/O resources meant that there was potential for one program to interfere – either intentionally or not – with the operation of another. This led to the development of various memory management schemes, process scheduling techniques and privileged instructions – in particular, I/O requests – which could only be executed by the operating system.
Later, timesharing systems – which could support multiple concurrent interactive users and programs – were introduced to enable programmers to debug their programs during an interactive session rather than go through the lengthy process of submitting a batch job, then debugging the program using static core dumps.
One key feature of timesharing systems is the online file system, which allows both interactive users and batch processes running in the background to access data and code on demand. The ability of interactive users to access a file system in use by many other users and processes naturally led to the introduction of access controls that protect both system and user programs and information.
All modern central processing units (CPUs) and multitasking operating systems including Windows NT, Mac OS X, Unix, VMS and MVS support – to some extent, and with varying levels of complexity – all the main components of a secure and stable operating system:
- Operating system controlled I/O operations.
- File system access controls.
- Memory management hardware and software ensuring applications execute within their own address space, and that all communication between processes (inter-process communication – IPC) and I/O devices is managed by the operating system.
- A timer, to enable pre-emptive multitasking, which prevents individual processes from monopolising use of the CPU.
Computer Networks and Security
The computer network and the trend towards distributed computing in the form of client-server and Internet applications have had a significant impact on security.
Where business applications were originally provided by a single computer system, mainframe or minicomputer, security was a matter of securing a single system. In a networked client-server environment, a business application is only as secure as the various components upon which the whole application is based.
A typical Internet application – for example, an online banking application – might consist of the following major components:
- your own computer system – the client;
- a Windows NT based web server running Internet Information Services v5.0 – the presentation layer;
- some middleware to enable the web server to connect to the bank’s central computer; and
- the bank’s central computer – the server.
In this case, the security of the Bank’s information system hinges largely on the security of the middleware, which provides an interface between the relative insecurity of an Internet-connected environment, and the relative security of the Bank’s central computer.
The security of your information while you are using the banking application is, however, dependent upon the security of your own computer and its operating system; the security of the web server, plus any application components the developers have installed on it; and the security of the middleware that allows the web server to connect to the Bank’s central computer.
The reason your information is at significantly higher risk while you are using the application is that in many cases, at each point along its journey from the Bank’s central computer to your screen, your confidential information exists somewhere in the memory of one of several computers in a raw, unencrypted form.
If the security of any one of these components – your computer, the Bank’s web server, or the firewall between the web server and the central computer – is seriously breached, then, assuming the middleware component does not encrypt its communications, your information is at risk of disclosure.
The recent Code Red worm scare – widely reported by the press as a bit of a “damp squib” – highlighted a serious vulnerability in Microsoft Index Server, which is installed with all new IIS installations. Unpatched, Microsoft Index Server will accept a request from the Internet that causes a buffer overrun and, ultimately, allows the attacker to execute arbitrary code on the server.
A similar buffer-overrun vulnerability was recently highlighted in the server component of Oracle 8i – the database management system used in many e-business applications. Despite the use of SSL to secure communications, such vulnerabilities may allow an attacker to assume some level of control over the web server and install application components that run in the same process (“in-process”) as the e-business applications themselves and therefore have free access to information handled by the application.
In practice, as we have seen, absolute security is very difficult to assure – particularly where systems connected to the Internet are concerned – and, whilst absolute security may be “desirable”, high levels of security often have a severe impact on convenience and ease of use.
Security, then, is a trade-off between confidentiality, ease of use and convenience on the one hand and financial liability, or risk, on the other.
An example of this type of trade-off, which many of us make in our daily lives, is the almost universal use of credit cards to pay for goods. The act of processing a credit card payment carries with it an element of risk – to you as a customer of the Bank, to the Bank itself, and to the Merchant handling the payment.
You, the customer, are responsible for any payments made using your card – or, if it is stolen, until the card is reported as such; the Bank is responsible for misuse of your card once it has been reported stolen; and the Merchant is responsible for payments made using your card where the signature used to sign the receipt is not sufficiently similar to that recorded on the back of the card.
In using a credit card, you, the Bank, and the Merchant each accept the financial liability associated with making, and accepting credit card payments. Each feels that the benefits of using this payment method – in terms of convenience, or increased revenue – outweigh the inherent risks.
In the course of trying to mitigate these risks, if either the Bank or the Merchant introduced a requirement that card users carry their passport with them to validate each purchase, this simple requirement would prevent customers from using their credit cards in situations where they would otherwise have done so. This relatively minor change would render the credit card inconvenient to use and would, most probably, reduce the revenues seen by both the Bank and the Merchant.
To determine the appropriate balance between security and risk, we must have some idea of the value of the various assets – the business, the business system or goodwill – and the risks associated with holding, managing or servicing those assets.
Asset analysis is the process of identifying what, within an organisation, needs to be protected: which assets are important, and what financial liabilities are associated with those assets.
In the context of an information system, the “value” of an information resource both to your own business, and to the “owner” of the information in question – the data subject, in data protection terminology – can be different.
For example, it can be argued that – for the most part – the records held by a doctor’s surgery are, to the doctor, near worthless. To a patient, however, those same records may not have any associated monetary value, but if disclosed without their agreement, the information could have real financial, or otherwise tangible impact on that patient’s life. This kind of financial, or otherwise tangible impact on a customer – “consequential loss” – could ultimately result in legal claims for compensation.
In many cases, the value of an information resource itself is impossible to quantify. The value of an information system, however, can usually be expressed as some proportion of the business itself: in the event of a serious security breach, media reports could have a significant impact on goodwill; and in the event of an accidental loss of data, or some other “disaster”, the business may not be able to function for a period of time until that data is recovered and the computer systems restored.
Threat analysis is the process that helps us to understand what events we wish to protect assets against. Potential threats include: power failure, telecommunications failure, theft, vandalism, disgruntled employees, fire, hackers, technical failures – for example, a power supply, hard disk or processor heat sink fan fault – data corruption, or human error.
Having determined the threats we wish to protect the organisation against, the objective of impact analysis is to quantify the likely impact of each threat. Impact can be quantified on a scale from 1, representing negligible impact, to 10, representing a catastrophic impact that the business could not survive.
The objective of risk analysis is to quantify the risk associated with a given threat occurring: risk is the product of a measure of the impact of the event and a measure of the likelihood of the event:
Risk = impact * likelihood
Likelihood can be quantified on a scale from 1 to 10, where 1 represents a highly unlikely event and 10 an event likely to occur daily, or perhaps even more frequently.
With the threat analysis and risk analysis stages complete, you can produce a graph. The graph – which plots threats against risk – is an explicit threat model, which allows you to understand precisely where you need to make provisions to improve the security of your business, your information and your customers.
Some threats – for example, war, flooding or power cuts – may be beyond our control but, in many cases, can still be addressed in part.
For example, in the case of a power cut, whilst we might be unable to ensure permanent uptime without installing a suitable generator, the use of an appropriate uninterruptible power supply (UPS) should give us more than enough time to shut our computer systems down properly, saving us from further downtime and from loss of data through corruption.
The security policy is the result of provisions and decisions that are made in response to the threat model; and, if concisely documented, provides a good measure of, and a standard for, the security of your business and computer systems.
Over the next year, data protection legislation here in Jersey – first passed into law in 1987 – is to be updated.
Much has changed in the years since 1987 and, as a result, any update to this law is likely to accentuate the principle of data protection that charges “data controllers” – companies and individuals that handle personal information – with the responsibility to “ensure a level of security appropriate to the harm that might result from a breach of security”.
With such legislation in place, evidence – documented or otherwise – of having thought about, and then implemented, suitable security provisions will, most probably, become mandatory in years to come.