In March 2005, I contributed an article to the newly re-designed ITnow magazine published by the British Computer Society ("BCS"). Entitled "SQL Injection", the article covered in brief some of the security risks associated with websites featuring dynamic content sourced from SQL and non-SQL databases.

Briefly, the developer of a software application designed for consumption and use by actors outside of an organisation – actors that may be anonymous – must be alert to potential attacks on the integrity of the application: every interaction between a client and server must be scrutinized for validity.

The article concluded with a simplified methodology for the development of such applications – constrain, reject, sanitize – summarised below and sketched in code after the list:

  1. Constrain - validate the length, type, range and, if necessary, the format of data passed as a parameter to the application;
  2. Reject - filter data known to be bad from any input passed to the application, possibly raising application exceptions on detection of bad data; and
  3. Sanitize - convert any remaining input that could potentially be malicious by, for example, escaping characters.
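
A minimal sketch of the three steps in Python (the parameter, length limit and blacklist below are illustrative assumptions, not a complete defence):

  import html
  import re

  MAX_LENGTH = 32
  # Sequences with special meaning to a SQL interpreter (illustrative, not exhaustive).
  BLACKLIST = re.compile(r"(--|;|/\*|')")

  def process_input(raw):
      # 1. Constrain: validate length and type before anything else.
      if not isinstance(raw, str) or not 0 < len(raw) <= MAX_LENGTH:
          raise ValueError("input fails length/type constraints")
      # 2. Reject: raise an application exception on detecting known-bad data.
      if BLACKLIST.search(raw):
          raise ValueError("input contains a blacklisted sequence")
      # 3. Sanitize: escape characters that remain potentially dangerous on output.
      return html.escape(raw, quote=True)

  print(process_input("alice"))  # -> alice
  print(process_input("a<b>c"))  # -> a&lt;b&gt;c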

More recently, in an article entitled "Defending against cross-site scripting attacks" (IEEE Computer, March 2012), Shar and Tan discuss solutions to cross-site scripting ("XSS"). Whereas in SQL injection the target for malicious code is the database sitting behind the website, XSS exploits poor input sanitization on a target website to attack features of the client browser. XSS attacks fall into three main categories – reflected XSS, stored XSS and document object model ("DOM") XSS – and the risks range from denial of service ("DoS") attacks, to unauthorised access to user information stored in cookies, to manipulation of the client DOM.

Shar and Tan identify four main defences: defensive coding practices, XSS testing, vulnerability checking and runtime attack prevention.

Defensive coding practices broadly relate to the constrain, reject and sanitize model introduced above, expressed in terms of four input sanitization approaches (illustrated in the sketch after this list):

  1. Replacement - which replaces blacklisted characters with alternate characters;
  2. Removal - which simply removes blacklisted characters;
  3. Escaping - which identifies characters that have special semantics for client-side interpreters and modifies the input to remove those semantics; and
  4. Restriction - which attempts to limit inputs to known good values.
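
A sketch of the four approaches in Python, assuming HTML output and a hypothetical blacklist (real applications would use context-appropriate character sets):

  import html
  import re

  BLACKLIST = "<>\"'&"

  def replacement(value):
      # Replace each blacklisted character with a harmless placeholder.
      return "".join("_" if c in BLACKLIST else c for c in value)

  def removal(value):
      # Strip blacklisted characters entirely.
      return "".join(c for c in value if c not in BLACKLIST)

  def escaping(value):
      # Escape characters with special semantics for the client-side interpreter.
      return html.escape(value, quote=True)

  def restriction(value):
      # Permit only known-good values (here: alphanumerics and underscores).
      if not re.fullmatch(r"\w+", value):
          raise ValueError("input outside the known-good set")
      return value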

XSS testing may involve specification-based, code-based and fault-based testing.

Vulnerability checking involves identifying vulnerabilities in server-side scripts through a combination of static and dynamic analysis. Briefly, static analysis involves identifying vulnerable inputs and tracing the flow of that data throughout the application to databases or output procedures ("sinks"). Dynamic analysis involves analysing the constraints governing the inputs, generating test cases involving both valid inputs and inputs containing exploit code, and comparing and differentiating the output.
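
As a toy model of the static-analysis idea in Python (the source, sink and variable names are hypothetical): inputs from known sources are marked tainted, taint propagates through assignments, and any tainted value reaching a sink is flagged.

  SOURCES = {"request.args"}
  SINKS = {"response.write", "db.execute"}

  def find_tainted_flows(statements):
      tainted = set()
      findings = []
      for kind, target, value in statements:
          if kind == "assign" and (value in SOURCES or value in tainted):
              tainted.add(target)              # taint enters, or propagates
          elif kind == "call" and target in SINKS and value in tainted:
              findings.append((target, value)) # tainted data reaches a sink
      return findings

  program = [("assign", "name", "request.args"),
             ("assign", "greeting", "name"),
             ("call", "response.write", "greeting")]
  print(find_tainted_flows(program))  # [('response.write', 'greeting')]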

Runtime attack prevention involves ...


Internet security

The common perception of Internet security is that if a website uses the SSL protocol to provide a secure channel for transmitting private details such as passwords and credit card numbers, then the website itself is inherently secure.

In reality, however, SSL only provides assurance for the integrity and security of information in transit between a client and server.

Security defined

Security is, in general terms, the pro-active protection of assets against threats.

An asset may be a physical asset – as in the contents of your home; a logical asset – such as information key to your business; or some other, less tangible asset – such as the goodwill of a business as a going concern, or simply the ability to conduct business.

A threat is any event that has the potential to remove, damage, destroy or gain unauthorised access to a physical, logical or less tangible asset.

Potential threats include, for example, human error, fraud, theft, fire, flooding, lightning, software faults, terrorism and malicious attack.

Security applied to information technology

In the context of business, computer systems have evolved – and continue to evolve – from simple tools such as word processors and spreadsheets to systems that encapsulate entire business processes through customer relationship, workflow and document management and, indeed, represent the business itself in Internet-based applications.

In many business areas – finance, travel, logistics and even some forms of retailing – the correct operation and continuity of computer systems is fundamentally important to the success – even survival – of the underlying business.

It is for this reason that the question of security in Information Technology is becoming increasingly important: as systems are interconnected within corporate local and wide area networks and, more recently, through the Internet, the risks to computing assets – information, processes and the integrity of computer systems themselves – have multiplied.

A brief history of IT security

When computers were first invented, they were used for very specific roles – for example, calculating ballistic firing tables for the military and breaking codes.

At the time, security was essentially physical: computers were located in physically secure environments and relatively few people had access to them.

The computers themselves had only a small amount of volatile memory and no backing storage, so the program and any data inputs it required were provided using early input devices such as the “plug board”, punched paper tape, or punched cards. Any output generated by the program was usually written to Teletype machines, punched paper tape or cards for subsequent processing.

On early machines, only one program ran at any given time and the program, data inputs and subsequent output were all “physical” objects whose security could be assured physically: paper tapes and cards could be locked away, and records that were no longer needed destroyed easily.

The need for logical security mechanisms – which prevent access to, or corruption of, information and programs by other programs run before, after, or concurrently on a given machine – surfaced as computing facilities increased utilisation to recover their huge installation and operating costs, which often ran into millions of pounds.

Before multi-programmed operating systems, early operating systems – called resident monitors – were used to sequence jobs and, after each job, were responsible for preparing the machine for use by the next job.

Multi-programmed operating systems, such as those IBM introduced in the late 1960s, optimised the use of a computer’s CPU resource by allowing different programs to execute while others were blocked waiting for I/O operations to complete.

With a multi-programmed computer system, contention for use of the computer’s CPU, memory and I/O resources meant that there was potential for one program to interfere – either intentionally, or not – with the operation of another program. This led to the development of various memory management schemes, process scheduling techniques and privileged instructions – in particular, I/O requests – which could only be executed by the operating system.

Later, timesharing systems – which could support multiple concurrent interactive users and programs – were introduced to enable programmers to debug their programs during an interactive session rather than go through the lengthy process of submitting a batch job, then debugging the program using static core dumps.

One key feature of timesharing systems is the online file system, which allows both interactive users and batch processes running in the background to access data and code on demand. The ability for interactive users to access a file system in use by many other users and processes naturally led to the introduction of access controls that protect both system and user programs and information.

All modern central processing units (CPUs) and multitasking operating systems, including Windows NT, Mac OS X, Unix, VMS and MVS, support – to some extent, and with varying levels of complexity – all the main components of a secure and stable operating system:

  1. Operating system controlled I/O operations.
  2. File system access controls.
  3. Memory management hardware and software ensuring applications execute within their own address space, and that all communication between processes (inter-process communication – IPC) and I/O devices is managed by the operating system.
  4. A timer, to enable pre-emptive multitasking, which prevents individual processes from monopolising use of the CPU.

Computer Networks and Security

The computer network and the trend towards distributed computing in the form of client-server and Internet applications have had a significant impact on security.

Where business applications were originally provided by a single computer system, mainframe or minicomputer, security was a matter of securing a single system. In a networked client-server environment, a business application is only as secure as the various components upon which the whole application is based.

A typical Internet application – for example, an online banking application – might consist of the following major components:

  1. your own computer system – the client;
  2. a Windows NT based web server running Internet Information Services v5.0 – the presentation layer;
  3. some middleware to enable the web server to connect to the bank’s central computer; and
  4. the bank’s central computer – the server.

In this case, the security of the Bank’s information system hinges largely on the security of the middleware, which provides an interface between the relative insecurity of an Internet-connected environment, and the relative security of the Bank’s central computer.

The security of your information while you are using the banking application is, however, dependent upon the combined security of your own computer and its operating system; the web server, plus any application components the developers have installed on it; and the middleware used to allow the web server to connect to the Bank’s central computer.

The reason your information is at significantly higher risk while you are using the application is that, in many cases, at each point along its journey from the Bank’s central computer to your screen, your confidential information exists somewhere in the memory of one of several computers in a raw, unencrypted form.

If the security of any one of these computers – your computer, the Bank’s web server, or the firewall between the web server and the central computer – is seriously breached then, assuming the middleware component does not encrypt its communications, your information is at risk of disclosure.

The recent Code Red worm scare – widely reported by the press as a bit of a “damp squib” – highlighted a serious vulnerability in Microsoft Index Server, which is installed on all new IIS installations. Unpatched, Microsoft Index Server will accept a request from the Internet that causes a buffer overrun and, ultimately, allows the attacker to execute arbitrary code on the server.

A similar buffer-overrun vulnerability was recently highlighted in the server component of Oracle 8i – the database management system used in many e-business applications. Because of vulnerabilities like these, and despite the use of SSL to secure communications, it may be possible for an attacker to assume some level of control over the web server and install application components that run in the same process (“in-process”) as the e-business applications themselves and, therefore, have free “access” to information handled by the application.

Applied security

As the examples above show, absolute security is very difficult to assure in practice – particularly where systems connected to the Internet are concerned – and, whilst absolute security may be “desirable”, in many cases high levels of security have a severe impact on convenience and ease of use.

Security, then, is a trade-off between confidentiality, ease-of-use and convenience on the one hand and financial liability, or risk on the other.

An example of this type of trade-off, which many of us make in our daily lives, is the almost universal use of credit cards to pay for goods. The act of processing a credit card carries with it an element of risk – to you as a customer of the Bank, to the Bank itself, and to the Merchant handling the payment.

You, the customer, are responsible for any payments made using your card – or, if the card is stolen, for payments made until it is reported as such; the Bank is responsible for misuse of your card once it has been reported stolen; and the Merchant is responsible for payments made using your card where the signature used to sign the receipt is not sufficiently similar to that recorded on the back of the card.

In using a credit card, you, the Bank, and the Merchant each accept the financial liability associated with making, and accepting credit card payments. Each feels that the benefits of using this payment method – in terms of convenience, or increased revenue – outweigh the inherent risks.

In the course of trying to mitigate risks, if either the Bank or the Merchant decided to require that card users carry their passport with them to validate a purchase, this simple requirement might prevent customers from using their credit card in cases where they would have liked to. This relatively minor change would render the credit card inconvenient to use and would, most probably, reduce revenues seen by both the Bank and the Merchant.

To determine the appropriate balance between the security and the risk, we must have some idea of the value of the various assets – the business, the business system or goodwill – and the risks associated with holding, managing or servicing those assets.

Asset analysis

Asset analysis is the process of identifying what, within an organisation, needs to be protected: what the important assets are, and what financial liabilities are associated with those assets.

In the context of an information system, the “value” of an information resource both to your own business, and to the “owner” of the information in question – the data subject, in data protection terminology – can be different.

For example, it can be argued that – for the most part – the records held by a doctor’s surgery are, to the doctor, near worthless. To a patient, however, those same records may not have any associated monetary value, but if disclosed without their agreement, the information could have real financial, or otherwise tangible impact on that patient’s life. This kind of financial, or otherwise tangible impact on a customer – “consequential loss” – could ultimately result in legal claims for compensation.

In many cases, the value of an information resource itself is impossible to quantify. The value of an information system, however, can usually be expressed as some proportion of the business itself: in the event of a serious security breach, media reports could have a significant impact on goodwill; and in the event of an accidental loss of data, or some other “disaster”, the business may not be able to function for a period of time until that data is recovered and the computer systems restored.

Threat analysis

Threat analysis is the process that helps us to understand what events we wish to protect assets against. Potential threats include: power failure, telecommunications failure, theft, vandalism, disgruntled employees, fire, hackers, technical failures – for example, a power supply, hard disk or processor heat sink fan fault – data corruption, or human error.

Impact analysis

Having determined the threats we wish to protect the organisation against, the objective of impact analysis is to quantify the likely impact of a given threat. Impact can be quantified on a scale ranging from 1, representing negligible impact, to 10, representing a catastrophic impact that the business could not survive.

Risk analysis

The objective of risk analysis is to quantify the risk associated with a given threat occurring. Risk is the product of some measure of the impact of an event and some measure of the likelihood of that event:

Risk = impact * likelihood

Likelihood can be quantified on a scale ranging from 1 to 10, where 1 represents an event that is highly unlikely and 10 an event likely to occur daily, or perhaps even more frequently.
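
Applied to a small, hypothetical risk register (the threats and scores below are illustrative assumptions only), the model might look like this in Python:

  # Risk = impact * likelihood, with both scored on the 1-10 scales above.
  threats = {
      "power failure":     {"impact": 6, "likelihood": 4},
      "hard disk fault":   {"impact": 7, "likelihood": 3},
      "theft of hardware": {"impact": 8, "likelihood": 2},
      "human error":       {"impact": 5, "likelihood": 7},
  }

  # Rank threats by risk, highest first, to show where provisions matter most.
  for name, t in sorted(threats.items(),
                        key=lambda kv: kv[1]["impact"] * kv[1]["likelihood"],
                        reverse=True):
      risk = t["impact"] * t["likelihood"]
      print(f"{name:18} impact={t['impact']:2} likelihood={t['likelihood']:2} risk={risk:3}")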

Threat model

With the threat analysis and risk analysis stages complete, you can produce a graph. The graph – which plots threats against risk – is an explicit threat model, which allows you to understand precisely where you need to make provisions to improve the security of your business, your information and your customers.

Some threats – for example, war, flooding, power cuts – may be beyond our control but, in each case, we can still address some of them in part.

For example, in the case of a power cut, whilst we might be unable to ensure permanent uptime for our computers without installing a suitable generator, the use of an appropriate uninterruptible power supply (UPS) should give us more than enough time to shut our computer systems down properly, saving us from further downtime and from loss of data through corruption.

Security policy

The security policy is the result of provisions and decisions that are made in response to the threat model; and, if concisely documented, provides a good measure of, and a standard for, the security of your business and computer systems.

Conclusion

Over the next year, data protection legislation here in Jersey is to be updated for the first time since it was passed into law in 1987.

Much has changed in the years since 1987 and, as a result, any update to this law is likely to accentuate the principle of data protection that charges “data controllers” – companies and individuals that handle personal information – with the responsibility to “ensure a level of security appropriate to the harm that might result from a breach of security”.

With such legislation in place, providing some evidence – documented or otherwise – of having thought about, and then implemented, suitable security provisions will, most probably, become mandatory in years to come.

Public Key Encryption and Internet Security

Public key encryption

Public key encryption, sometimes called asymmetric encryption, was invented in the late 1970s in response to the problem of sharing information between parties without the possibility of the keys used to encrypt that information being compromised.

In private key encryption, or symmetric encryption, both the sender and the recipient of encrypted information have to be in possession of the private key: the sender needs it to encrypt information at source, and the recipient needs it to decrypt, or unlock, the information upon receipt.

The major problem with private key encryption is often not with the strength of the encryption keys themselves, but in the exchange of these private keys between the sender and recipient.

If the private key should become compromised, then anybody in possession of the private key and the corresponding decryption algorithm can read any and all communications between the sender and recipient without detection.

For example, in World War 2, Station X – the UK Government’s communications intercept centre, established at Bletchley Park in 1939 – concentrated on compromising the private keys used to encrypt German messages. The keys were often sent just before the start of the message.

The schemes used by the German Navy – Naval Enigma – caused Station X some of the greatest problems. In addition to making the encryption key “stronger” by adding a fourth wheel to the usual 3-wheel Enigma machine, the German Navy also used code books to remove the need for full encryption keys to be sent as part of a message. Code books were printed in faint, water-soluble ink so that, in the event a ship was captured by the Allies, the radio operator only had to throw the code book into water for the book to become useless – and for the Naval encryption system to remain secure.

It was only when one of these code books and a Naval Enigma machine were captured by the British in 1941 that the Allies were able to listen in on German Naval communications and anticipate certain attacks.

Public key encryption solves the problems inherent in private key encryption by having two mathematically related keys: a public key and a complementary private key. The public key is used to encrypt a message and, upon receipt, the receiver uses their private key to decrypt it. It is extremely difficult, if not impossible, to determine the value of one key from the other.
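
A minimal sketch of this asymmetry, using the third-party Python cryptography library (an assumption; any RSA implementation would illustrate the same point):

  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives import hashes

  # Generate a mathematically related key pair.
  private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  public_key = private_key.public_key()

  oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  # Anybody may encrypt with the public key...
  ciphertext = public_key.encrypt(b"a confidential message", oaep)

  # ...but only the holder of the private key can decrypt.
  assert private_key.decrypt(ciphertext, oaep) == b"a confidential message"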

Secure applications

Before the World Wide Web became consumer-oriented, Internet security was application-specific – in other words, if an organisation wanted to transfer or authenticate information securely over the Internet, it had to implement its own encryption scheme, use an existing encryption tool such as PGP (pretty good privacy), or use a library that implements a well-known encryption system such as the Data Encryption Standard, DES, or RC4.

The development of Secure Sockets Layer, SSL – the de facto standard for secure web communications – dates back to late 1993, when the National Center for Supercomputing Applications, NCSA released its web browser, Mosaic, and its web server, httpd. Mosaic and httpd were the first implementations of the HTTP/1.0 standard and incorporated support for fill-in forms and server-side scripting through the common gateway interface, CGI.

As more sophisticated applications were developed using Mosaic and httpd, various groups began to develop secure protocols for both information transferred from the server to the browser; and for information submitted through the browser to the server.

In later versions of Mosaic and httpd, hooks to the program PGP were introduced to support the Privacy Enhanced Mail (PEM) standard, and an experimental version of the Common Client Interface (CCI) – the client-side equivalent of the Common Gateway Interface (CGI) – was also used in combination with PGP to ensure the security of both client-server and server-client communications.

At around the same time, an American company called Enterprise Integration Technologies, EIT developed S-HTTP – a superset of HTTP that allowed messages to be secured in a variety of ways, including encryption and digital signatures. In April 1994, the NCSA and two companies, RSA – the owners of the RSA encryption system – and EIT announced their intention to develop a secure version of Mosaic to enable “buyers and sellers to meet spontaneously and transact business”.

Also in April 1994, Netscape began developing its web browser for the mass market. With a clear understanding of the growth of the Internet – at the time, roughly 25 million users – Netscape recognised the need for secure Internet transactions to facilitate electronic commerce and began designing Secure Sockets Layer, SSL as an open, secure communications protocol.

SSL differs from other secure protocols in that it was developed as a transport-level protocol. This means that rather than providing security for a specific application such as a web browser, SSL was designed as a layer that sits between an application and the network protocol TCP/IP and secures all communications between the application client and application server – without the software developer having to consider issues such as, for example, how to negotiate the encryption system in use, and how to exchange keys.
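
A minimal sketch of this layering in Python (example.com stands in for any TLS-enabled server): the application code is ordinary socket code, with the secure layer wrapped around it.

  import socket
  import ssl

  # TLS, the successor to SSL, sits between the application and TCP.
  context = ssl.create_default_context()  # negotiates ciphers, verifies certificates

  with socket.create_connection(("example.com", 443)) as raw_sock:
      with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
          # The application reads and writes as usual; key exchange, encryption
          # and integrity checks all happen inside the wrapped socket.
          tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
          print(tls_sock.recv(1024))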

SSL

In late 1994, the first implementation of SSL, version 2.0, was released. SSL 2.0 laid the foundation for a good, general purpose secure application transport protocol but its use in applications involving substantial risk, or funds transfer was limited due to a number of shortcomings.

In response to these shortcomings, Microsoft, in association with Visa, released its own encryption layer, called Private Communications Technology, or PCT. PCT was intended as an alternative to, and an enhanced version of, SSL 2.0 and was submitted as a candidate standard to the Internet Engineering Task Force, IETF – the body responsible for developing Internet standards.

Netscape released SSL 3.0 in late 1995, incorporating features from both SSL 2.0 and PCT. Since then, the IETF has assumed responsibility for SSL, renaming it Transport Layer Security, or TLS, to avoid showing a preference for either company. SSL 3.0 is currently the industry standard for secure communications and provides support for all three major functions essential for secure electronic transactions: mutual authentication, data encryption and data integrity.

Mutual authentication

Mutual authentication is the term given to the process of establishing trust between the client and server through digital certificates. A digital certificate is a package of information issued by a trusted third party called a certification authority, CA.

The certification authority signs this package digitally using its own private key. Using the certification authority’s own public key, the client application – in most cases, a web-browser – can confirm the source of the certificate and, provided the source is reputable, you have confidence that the server you are sending your private details to is the intended recipient.
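
A sketch of that signing relationship, again using the third-party Python cryptography library (the certificate data is a stand-in; real certificates are structured X.509 documents):

  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives import hashes

  # The certification authority signs the certificate data with its private key...
  ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  certificate_data = b"subject=www.example-bank.com; public-key=..."
  signature = ca_key.sign(certificate_data, padding.PKCS1v15(), hashes.SHA256())

  # ...and any client holding the CA's public key can confirm the source.
  # verify() raises InvalidSignature if the data or signature has been altered.
  ca_key.public_key().verify(signature, certificate_data,
                             padding.PKCS1v15(), hashes.SHA256())
  print("certificate signature verified")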

Similarly, many organisations, such as the UK’s Inland Revenue, are beginning to implement their own public key infrastructures (PKI) to enable client authentication. Client authentication enables an organisation you deal with to establish confidence – beyond the usual username / password combination – that it can accept information from you, or send information to you, that might be confidential or legally binding.

Data encryption

Data encryption is the process of obscuring information sent between the client and server during a secure electronic conversation. Data encryption ensures that anybody who manages to intercept, or listen-in on a secure conversation is unable to determine precisely what is being transmitted.

SSL provides strong data encryption between the client and server through private key, or symmetric, encryption using a pair of session keys: one for each direction of data communication – client-server, and server-client. The session keys, which typically last only for a single encrypted conversation, are generated during what is called a key-exchange handshake, which takes place between the client and server before any private information is transmitted.

For the client and server to ensure privacy, SSL must implement a secure handshake protocol that protects the session key material during transit between the client and the server at the beginning of the session. This is where public key encryption comes in.

After the client and server have negotiated the public key encryption and compression schemes for use during the session, the client shares with the server a “pre-master secret”. This is a 48-byte value, generated by the client using a secure random number generator, which is encrypted using the server’s public key. The client then sends the encrypted pre-master secret to the server.

Upon receipt, the server uses its private key to recover the 48-byte value generated by the client from the encrypted message. This 48-byte value is then processed using a “one-way function”, where, for any given input, the output is always the same but the original input value cannot be derived from the output.

The new value generated by the one-way function is, within the context of SSL, called the “master secret”, from which both the client and the server can generate the four further keys needed for secure data communication: the client-server encryption key; the server-client encryption key; and the respective client and server message authentication code keys, called MAC secrets, used for checking that a secure message has not been tampered with during transmission.

The keys are derived from the master secret – rather than using the master secret itself – to ensure that no information that could be used to derive the master secret is ever transmitted over the network: if the master secret is ever compromised then unlocking all communications between the client and server is a simple process.
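
The derivation chain can be sketched in a few lines of Python. This is an illustration of the idea only: real SSL/TLS uses its own pseudo-random function, not the scheme below.

  import hashlib
  import os

  pre_master_secret = os.urandom(48)  # the 48-byte value generated by the client

  # One-way function: the output is deterministic, but the input cannot be
  # recovered from it.
  master_secret = hashlib.sha256(b"master" + pre_master_secret).digest()

  # Derive the four directional keys from the master secret rather than using
  # it directly, so no transmitted material can be used to recover it.
  labels = [b"client-server key", b"server-client key",
            b"client MAC secret", b"server MAC secret"]
  keys = {label.decode(): hashlib.sha256(master_secret + label).digest()
          for label in labels}
  print(sorted(keys))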

Data integrity

Data Integrity, in the context of an electronic conversation between a sender and a recipient, is the ability for the recipient of information to:

  • test the accuracy of information transmitted by the sender to ensure that changes have not been made to the message during transit, either intentionally or not;
  • establish confidence that the information was actually sent by the sender and not by a third party; and
  • determine whether information has been delayed in transit, or “replayed” from an earlier transaction by a third party.

SSL ensures data integrity using message authentication codes. Information transmitted over an SSL connection is broken up into fragments. For each fragment, a message authentication code, or MAC, is generated using a one-way function in combination with the appropriate MAC secret – client-server, or server-client – generated during the initial key-exchange handshake.

The MAC is then sent along with the encrypted data fragment to the receiver. The receiver decrypts the data fragment and generates its own MAC using the appropriate client-server/server-client MAC secret and then compares the MAC it has just calculated with the MAC generated by the sender. If the two MACs are the same, then the recipient can be confident that the message has not been corrupted or modified during transit and has been sent by the sender and not a third party.
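
The comparison step can be sketched with an HMAC in Python (modern TLS uses HMAC; SSL 3.0 used a similar but distinct construction), with the sequence number included to support the replay defence described below:

  import hashlib
  import hmac

  mac_secret = b"per-direction MAC secret from the handshake"  # illustrative value

  def mac_for(fragment, seq):
      # Real SSL MACs also cover record header fields; this covers the essentials.
      return hmac.new(mac_secret, seq.to_bytes(8, "big") + fragment,
                      hashlib.sha256).digest()

  # The sender computes and attaches the MAC; the receiver recomputes and compares.
  fragment, seq = b"account balance: 100", 7
  sent_mac = mac_for(fragment, seq)
  assert hmac.compare_digest(sent_mac, mac_for(fragment, seq))        # intact
  assert not hmac.compare_digest(sent_mac, mac_for(b"tampered", seq)) # modified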

The problem of identifying whether information has been delayed in transit, or replayed from an earlier transaction, is resolved in two ways: first, the use of session keys means that different encryption keys are used for each discrete electronic conversation, so that messages from one conversation are rendered useless in a later one; and second, sequence numbers encoded in the MACs sent with each fragment ensure that, if any part of a message is delayed or replayed during the same conversation, the connection is terminated, alerting both the recipient and sender to possible interception.

Conclusion

Using public and private key encryption schemes in combination, SSL – or TLS, as it is now called – represents a solid foundation for the development of secure applications on the Internet and is a key enabling technology for e-business applications beyond simply acquiring username / password combinations and credit card details.
