E-commerce: It Is a Matter of Trust by Adrian McCullagh, Director of Electronic Commerce, Gadens Lawyers and William Caelli, Information Security Research Centre, Queensland University of Technology WHY IS TRUST IMPORTANT IN E-COMMERCE? Trust has traditionally been established as a result of combining each of the above components in varying proportions. The ultimate basis for trust in commerce has usually been the result of market and social forces with the assistance of legislative mechanisms. The US Department of Commerce, in its April 15, 1998 Report, noted that some products and services have become critical to the general framework of society. Society has, in effect, become dependent upon such products and services. For example, Western society has become entirely dependent upon the telephone, electricity and energy supply, motor transport and, for that matter, the refrigerator. In this regard, there are four phases that a product/service may pass through in establishing market penetration. These phases, in order of achievement, are:
(a) Trust; Trust is the first phase in market penetration. If a product does not have the necessary market trust then it will most likely fail commercially. Trust is based on an assessment of risk, whereas confidence is based on familiarity. There will always be an element of risk within the confidence environment, but within the trust environment there exists a presupposed position of risk. Even if a product does achieve the necessary confidence in the marketplace, that does not automatically result in a successful product, as the product may be out-marketed by a competing product, even one that is technically inferior. Once confidence is achieved, society may become reliant on the product. Obviously not all products reach this level. There are many highly successful products that society is not dependent upon; e.g., "Coca-Cola" and photocopiers. One could not say that society is reliant upon a soft drink, no matter how enticing the product may be. An example of a product that society has become reliant upon is television. One may have friends who do not own a TV set, but that could not be said of the vast majority of people in a Western society. From a news and general entertainment perspective, Western society has become reliant upon television systems and the associated TV set. Hence, it is possible to identify that TV is a "reliant" product but has not reached the dependence phase. Some products do reach the dependence phase, in that a particular society becomes dependent upon the product. As stated above, such products include the telephone, the motor vehicle, electricity and, possibly, the refrigerator.
Figure 1. The general pattern of market penetration for products and services. The real importance from the electronic commerce viewpoint is that e-commerce will not reach its full potential unless the aspect of establishing trust in the e-commerce environment is achieved. Without the establishment of trust, e-commerce will not be a major economic environment. In this sense, a "trust threshold" must be rapidly attained. There is evidence that by late 1998, at least in the USA, and possibly Europe, such a threshold may have been reached.
A further consideration in the establishment of trust is that if appropriate "trust" policies are implemented today, the extra cost involved will be substantially recouped in the future. There is a growing prevalence of identity fraud in commercial transactions. Identity fraud involves the theft of a person's identity or a corporation's identity for fraudulent purposes. As will be proposed below, if governments implement appropriate policies for preventing identity theft, then the escalation of fraud in the electronic commerce environment may be avoided. That is, a "gram of prevention" will truly be worth a "kilogram of cure." The long-term cost to society of implementing preventative policies is insignificant compared to the cost of policing and enforcing against illegal activities. TECHNOLOGY TRUST Technology trust involves the classification or categorization of certain technologies and, more importantly, the "artifact" products that represent those technologies. As stated above, Josang has proposed an extensive model dealing with technology trust, but what Josang does not do is give any analysis concerning various classifications of trusted technology and artifacts. This concept of "technology trust" has even entered into the legal systems of some countries where, in relation to the laws of evidence, a complex system, such as a computer system, must be assumed at law to function "correctly" unless contrary evidence is adduced. The onus of proof of unreliability, and thus untrustworthiness, lies with the challenger to such trust in a product or system. In law, then, trust is deemed to exist in that a system is deemed to have performed according to its known specifications in a reliable and complete manner unless proven otherwise.
At common law a party, in order to rely on the evidence generated by a scientific instrument, must prove to the satisfaction of the Court that the scientific instrument was working in accordance with its specifications at the time the relevant evidence was generated. That is, there is a presumption against the evidence generated by the instrument unless the instrument has become pervasive, in which case the Court can take judicial notice of the instrument. When this occurs, the scientific instrument no longer has to be proven to be accurate. It is for the person objecting to the evidence generated by the instrument to show that the particular scientific instrument in question was not working at the time the evidence was generated. This position has in part been enacted in the Queensland Evidence Act in section 95. Section 95, in part, provides that a computer will be deemed to have been working correctly if a responsible person provides a statement to the court certifying that the computer that generated the evidence was working properly. The shortfall of this provision is that the responsible person is not required to have had direct evidence of such proper working, but may rely upon hearsay evidence to support the certification statement. Such "hearsay evidence", moreover, may be even more indirect where a computer system, for example, is fully imported and its underlying hardware and software structures are unknown in the importing nation. In relation to computer and data network systems, the bases of electronic commerce business operations, trust in the underlying products and systems, and the associated technology used to create them, involves an analysis of the computer system from the perspective of trusting the outcome that has been created by a particular process. 
In this sense, trust has become intertwined with the problems of overall system security, although in this latter case the definition of trustworthiness is extended to computer and data/telecommunications network security systems themselves. For example, in a general sense a trusted system may be one that can be relied upon to produce a known output from a known input in a reliable, consistent and complete manner, no more and no less, even if the mechanisms by which this is achieved are not known to the person or persons aiming to trust the system. In relation to computer and network system security, this definition has been generally narrowed to relate to a system that can be relied upon to enforce a known and defined security policy. If it is acknowledged that trust and system security are intimately linked, then there is a compelling argument that, given the statement earlier in this paper that ordinary users of e-commerce/business systems may have no knowledge or understanding of the systems, and their security, involved at each "end" of a transaction, the need for an external "umpire" to provide guidance on trustworthiness is paramount. This is the role of evaluation criteria, a concept that is common in other arenas, e.g., pharmaceuticals, motor vehicles, air travel, etc. Early computer systems, generally categorised as first and second generation systems, were largely seen as being "batch" oriented; i.e., information processing tasks were performed in clusters or batches, with control of the data and associated processing programs usually vested in an organisational structure such as a "computer centre" or the like. In this sense, trust in the secure operation of the computer was vested in the people in charge of it through such activities as:
(a) data preparation (punching of data onto cards or paper tape), Security was thus effected through such techniques as personnel vetting (suitability to program or operate a computer) and perimeter controls (the "locked" computer centre). In turn this led to an "us" and "them" dichotomy, as the computer system became remote from its users. However, security, and thus some form of trust in the reliable outcomes of computer operations, could be clearly and readily developed. This changed in the early 1960s as "time-sharing" computer systems, usually effected by the connection of numbers of "teletype" terminals to a central mainframe or, later, a minicomputer system, became widespread. It should be noted that even in those days batch operations still continued, with "time-sharing" co-existing with that mode of operation. These time-sharing operations led to a breakdown in the trust and security arrangements that had existed in the earlier days. In particular, there was now:
Time-sharing users could analyse their own problems and create their own programs directly in the system. They could then use such programs with data they entered themselves, or which had been "batched" earlier. So, in relation to the military use of such time-sharing systems in the USA, a "task force" was set up in 1967 by the then "Advanced Research Projects Agency" (ARPA), the originator of the Internet as we know it today. The aims of this task force were: "... to study and recommend appropriate computer security safeguards that would protect classified information in multi-access, resource-sharing computer systems". The report was originally published in 1970 for the USA's "Office of the Director of Defense Research and Engineering" in a classified "confidential" form. It was declassified in 1975 and is known as the "Ware Report" (after the Chair of the investigating committee). The report identified five "types of vulnerabilities". Interestingly, this report does not link the work done with the concept of "trust" and, indeed, the word "trust" does not appear in the report. Only later, with the development of the USA's "Trusted Computer System Evaluation Criteria" (TCSEC) in 1983, does the word "trust" take a prominent role. However, all of the above parameters are still vital in the electronic commerce environment as use is made, not just of "resource-sharing" computers, but indeed of "commodity" level systems designed with little to no security criteria in mind at all. At the time, this report was influential in a number of projects that aimed at the creation of secure computer systems, including the "CAL" operating system project at the University of California and the MULTICS project at M.I.T. However, the next major step in overall computer system security was taken in the U.S.A.
in 1983 with the publication of the TCSEC (also known as the "Orange Book" from the colour of its covers) as mentioned above, which clearly cited in its introduction the previous work by Ware et al. The purpose of this report is set out as a set of requirements, as follows:
These parameters take on new importance in the global electronic commerce and Internet environment while essentially remaining unchanged. In the Orange Book system security and "trust" are clearly linked and carefully analysed with an emphasis on general, commercial products and not just on military needs. Today, there are three basic classification and evaluation schemes for trusted computer systems in accepted usage, namely:
(a) TCSEC - Trusted Computer System Evaluation Criteria (USA),
Other schemes do exist that claim a closer relationship with commercial, rather than military and governmental, requirements. These are not, however, considered here. The "Common Criteria" (CC), while prepared by a specific international defence-related group (the Common Criteria Project Sponsoring Organisation), is aimed at being an international standard which will eventually supplant the earlier TCSEC and ITSEC specifications. The standard will be set out by the International Standards Organisation as ISO 15408, entitled "Evaluation Criteria for Information Technology Security." All of these schemes rely upon a rating scheme to concisely express the security, and thus the trustworthiness, of the associated computer system, called a "target of evaluation" (TOE) in the ITSEC. In a trust sense, each of these schemes for evaluating computer systems identifies a set of required security functions that will build trust in the overall function of the system (assuming that system security is a major parameter in the human assessment of trustworthiness in a computer system), while then creating a measure of the confidence that can be placed in the dependable, consistent and complete operation of those security functions. With ITSEC and CC, security functions are clearly identified and separated from the evaluation of the reliability of those functions, while in the earlier TCSEC they are not. The purpose of TCSEC (1983/1985) was to provide a standard for manufacturers, mainly of USA origin, as to what security features to build into their new and planned commercial products in order to provide widely available systems that satisfy trust requirements (with particular emphasis on preventing the disclosure of data) for sensitive applications, particularly in a military environment.
However, in a commercial sense, within the global e-commerce environment, the safe and secure generation of the cryptographic key pair for digital signatures and the protection of signing mechanisms are of most importance within the security framework, as distinct from the earlier TCSEC confidentiality need. Thus a function that builds trust has been identified and its reliable and dependable operation, protected from accidental or deliberate tampering, for example, becomes an element in the overall trust and confidence building approach. While security functions and their reliability assessment are a basic requirement, there is an even more fundamental need in relation to overall system reliability, and thus trust. The original TCSEC nominated six fundamental requirements of any computer system that aims at attaining a level of trustworthiness. These were:
(a) Security Policy: There must be an explicit and well-defined security policy enforced by the system; With these fundamental requirements in mind, ITSEC went further than TCSEC and separated security functions from the assessment of their reliability. It defined seven evaluation levels to form a "trust hierarchy" in the reliable operation of the security features of an information system. Security functions were to be assessed or evaluated by human "evaluators" and, as a result of that evaluation, a simple measurement or tag was to be associated with the functions. The words used are important, since they emphasise this building of trust by human users in overall information systems. The assurance levels are as follows:
- Informal description of detailed design exists
- Evidence of functional testing to be evaluated
- Configuration control system exists
- Approved distribution process exists
- Source code and/or schematics for hardware to be evaluated
- Evidence of testing of these must be evaluated
- Underlying formal model of security policy supporting the security target exists, and
- Security enforcing functions, architectural design and detailed design specified in semi-formal style
- Close correspondence between detailed design and software source code/engineering hardware design drawings
- Security enforcing functions and architectural design must be specified formally, consistent with the formal model of security policy.
The use of these "technical" parameters for trust assessment has now found its way into law in at least one country, Germany. That country's "digital signature" law (October 1998) clearly specifies that for a digital signature to be acceptable in law it must be created within a component forming part of the system that is security (trust) evaluated to an "E2" level, with a higher "E4" level being required for so-called "trusted" third parties providing public key certificate and key management functions, as well as other services, to electronic commerce customers. This highlights the "mechanist" approach to the building of trust. The existence of a high level of security evaluation in a national electronic commerce system is perceived as being able to raise the trust level of the populace in the use of such systems. It should be noted that this is not uncommon in other areas, such as motor vehicles, etc. The next section of this paper considers this in the light of concepts of "social trust." In other words, does mandatory legislation assist or not in raising trust in complex computer based systems, such that these systems may be readily accepted and used by the general populace?
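The hierarchy of assurance levels discussed above reduces to a simple ordering check: a component's evaluated level either does or does not reach the level a regime requires. The sketch below is illustrative only; the `meets` helper is hypothetical, but the ordering of ITSEC levels from E0 (lowest) to E6 (highest), and the E2/E4 thresholds, follow the text's account of the ITSEC scheme and the German digital signature law.

```python
# ITSEC assurance levels from E0 (no assurance) to E6 (highest),
# as described in the text above.
LEVELS = ["E0", "E1", "E2", "E3", "E4", "E5", "E6"]

def meets(evaluated: str, required: str) -> bool:
    """True if the evaluated assurance level is at least the required level."""
    return LEVELS.index(evaluated) >= LEVELS.index(required)

# Thresholds cited in the text from the German digital signature law:
# E2 for the signing component, E4 for a "trusted" third party.
assert meets("E2", "E2")        # an E2 signing component suffices
assert not meets("E2", "E4")    # but E2 is insufficient for a trusted third party
assert meets("E5", "E4")        # higher evaluations always satisfy lower requirements
```

The point of the ordering is that a legal regime need only name a threshold; any system evaluated at or above it qualifies.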
Or is it rather that what is needed is a legal "fall-back" if problems occur in an electronic commerce structure, with that "fall-back" position clearly understood by those who would be involved in such disputes, e.g., lawyers, politicians, governments, etc.? Caelli, in "B is for Business: Mandatory Security Criteria and the OECD Guidelines for Information Systems Security," clearly states that mandatory security functionality should be made a requirement in all legal regimes in order to facilitate safe electronic commerce. One of the reasons for this mandatory position is that for "trust" to be established, the user must be in a position to accept the output based upon a clear and defined classification of the system. That is, by incorporating a trusted mechanism, evaluated under TCSEC or ITSEC to an appropriate classification, the user and the trier of fact (the judicial system) will be in a better position to accept the functionality of the system that the user is utilising. In reality most users will not care or worry about such classification. It will be the financial institutions that will dictate the classification or standards and thus, in an indirect fashion, provide the assurances that a normal consumer needs to make a trust decision. This already occurs in Australia, for example, with the use of the widely accepted AS 2805 series of standards for security enforcement in electronic funds transfer systems. Their usage has had a direct effect on user acceptance of, and trust in, EFTPOS (Electronic Funds Transfer at Point of Sale), as evidenced by the very low incidence of legal action in the area. It can be maintained that such little litigation is the direct result of the use of "high-trust" computer based components, such as "PIN" pads, in the overall EFTPOS national network. Financial institutions are suited to this role because they are in a better position to evaluate and dictate the risk involved in the technology used in effecting electronic commerce.
Users, and even whole nations, may have no knowledge of the underlying software and hardware used in an electronic commerce network. Such systems may be increasingly "closed," i.e., consumers, enterprises and nations will not have access to the underlying technology in the form of software source code, full description of all interfaces, system and application, hardware schematics and chip designs, etc. Trust decisions will thus have to be made on other grounds, as is the norm with more and more complex system structures. To bring this into perspective, when a person signs a contract on paper, the signer has absolute control over the signing mechanism. This is one example of absolute trust in practice. The complete document may be perused and its surroundings and context examined (no "carbon-paper" underneath the signing area, etc.) prior to affixation of a person's physical signature. When an electronic commerce "document" is being digitally "signed," the signer is not in the same position as when he/she signs using the traditional method. In the digital age the digital document is represented as a series of bits/electronic impulses within memory. There should be a full and complete legible representation of what resides in memory displayed on the screen for perusal. This is assuming that the representation on the screen corresponds with what is in memory. Hence there exists a perception of trust in the computer system. Just as a driver "trusts" the brakes on the car to work as specified, the electronic commerce "driver" must assume that the "document" to be signed has been thoroughly protected by the system in use and is not subject to deliberate or accidental tampering. But if the system has an insufficient trust classification, then the user may possess an unrealistic level of trust in the system. Trust is primarily based on risk analysis and risk analysis is based on an assessment of data. 
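The requirement described above, that the representation on the screen correspond with what is in memory before anything is signed, can be made concrete with a digest comparison. The sketch below is a minimal illustration, not the paper's mechanism: the document text and the choice of SHA-256 are assumptions for the example.

```python
import hashlib

# "What you see is what you sign": before a signature is affixed, the
# bytes held in memory must match the text actually shown to the signer.
document_bytes = b"Pay $100 to Alice"   # the in-memory document to be signed
displayed_text = "Pay $100 to Alice"    # what the screen presented for perusal

def digest(data: bytes) -> str:
    """SHA-256 digest of the data, as would be signed in a digital signature scheme."""
    return hashlib.sha256(data).hexdigest()

# If the displayed representation hashes to the same value as the bytes in
# memory, the signer is signing what was actually seen.
assert digest(document_bytes) == digest(displayed_text.encode("utf-8"))

# Any tampering with the in-memory document, deliberate or accidental,
# changes the digest and so would change what is signed.
tampered = b"Pay $999 to Mallory"
assert digest(document_bytes) != digest(tampered)
```

The check only helps, of course, if the component performing it is itself trustworthy, which is precisely the evaluation problem the preceding sections address.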
The lack of clear data can give a false positive with regard to trust in any particular situation. One could call this "trust in ignorance." Most users will not have the capability to properly assess the trust classification of a computer system, nor should they be required to undertake such a task. Such classification should be part of the role of government if commerce does not impose a sufficient self-regulatory position. It is doubtful that the computer industry would impose a sufficient self-regulatory trust system, as there is a substantial cost involved in developing trusted systems. But if e-commerce is to reach its full potential, there must be a mechanism in the e-commerce environment that is no less trustworthy than in the traditional commercial environment, and commerce should embrace trusted systems so as to take advantage of this relatively new commercial environment known as e-commerce. Such schemes already exist, in a legal sense. For example, in the car industry in Australia, all new cars must be manufactured and supplied against clear and enforceable safety and allied standards (the Motor Vehicle Standards Act 1989 (Commonwealth)). There is no reason why exactly the same requirement should not exist in the vital electronic commerce arena. A further influence on the establishment of trust within the electronic commerce environment is social trust. SOCIAL TRUST As stated above, trust presupposes an awareness of risk. Excluding the effects of artificial intelligence, the concept of trust always involves two people. The types of external information that have been available to them, and how that information has been interpreted to and by them during their lifetimes, influence their beliefs. Cultural influence therefore has a profound effect upon the amount of trust that people possess in any situation.
An integral part of risk is the influence of "social trust," which is based on cultural values that are communicated in narrative form within society by elites. The social trust issue within the electronic commerce environment is that once a consumer is "on the Internet" and participating within an electronic commerce environment, the issue of cultural values will become important, and as such social trust needs to be addressed. Giddens classifies this as the faceless imposition of the use of technology. It is proposed that social trust will have a substantial effect upon the extent of faceless transactions within any given society. In support of this proposition, the penetration of EFTPOS transactions around the world is a clear example. Fukuyama has identified that within the global community there are differing cultural societies that have developed substantially differing views on trust. According to Fukuyama, the Chinese, French and Italian societies can be classified as low-trust societies. The indicia of these societies are that, from a business point of view, people do not trust those outside of their close-knit family. That is, analysis of the corporate structure within these cultures reveals that, in general terms, there is a close relationship between ownership and management. It is usually the founder and his/her siblings and children who control and operate the business. Some of these businesses are very large and very successful and are household names. In contrast to low-trust cultures are the high-trust cultures. Such cultures include the Japanese, Australians, Americans and British. The indicia of high-trust cultures are the development of multinational organisations where the ownership and the management of corporations have effectively been separated. That is, the shareholders of publicly listed corporations have placed their trust in third parties (the directors of the corporation) whom they normally do not personally know.
The shareholders base their trust upon external information given to them and upon societal and legal pressures on the management of the corporation. It is suggested that within the global electronic commerce environment there exist various strata of societies somewhere between high-trust societies and low-trust societies. The indicia of these categories are as yet to be determined, but at the very least the structure of the corporate environment will, as Fukuyama has identified, form a critical aspect. It is suggested that the extent of the legal regime established within each cultural group will be an aspect that needs to be taken into account in the development of an appropriate electronic commerce framework. That is, there will not be a uniform global approach to the legal infrastructure necessary to facilitate electronic commerce. Each participating society forming part of the global electronic commerce community will need to take account of its internal culture as well as addressing global interoperability. This is not an easy task, but currently most jurisdictions are failing even to take account of their internal cultural needs and have instead followed the general work of UNCITRAL or the work of the American Bar Association. BRAND TRUST
The role of the brand cannot be overestimated. Trust in a product by the general public will most likely be achieved through marketing. Even prehistoric evidence for this exists in the emergence of "Mailu" pottery as a monopoly in pottery manufacture in the southern Papua New Guinea area. Mailu islanders possessed the appropriate skills and transportation (via canoes) to enable their pottery to become highly demanded by the time of early European contact. Branding as a widespread and valued commercial phenomenon, however, is relatively recent. In dealing with brand trust, the fair trading rules governing consumer protection are relevant. One particular legislative rule that engenders brand trust is that in Australia a corporation must not, in trade or commerce, engage in conduct that is misleading or deceptive or likely to mislead or deceive. This legislative restriction engenders trust within the community by giving the general public a cause of action in circumstances where sharp practices may be involved. The consumer protection laws do not lie just in protecting consumers against sharp practices, but also provide protection against products that are not of merchantable quality or fit for their purpose. A further mechanism that is designed to engender trust is the product liability provisions of the Trade Practices Act. This legislative support assists in engendering brand trust within the consumer environment. In dealing with brand trust, the trust can be either personal or societal. For example, in the purchase of a personal computer and its operating software from a reputable manufacturer/dealer, the manufacturer and dealer may have a trade reputation of providing quality goods. There must be no reason to suspect any tampering with a product (the introduction of computer viruses, Trojan horses, bugs). With Commercial-Off-The-Shelf (COTS) products the consumer is generally not given sufficient information to reasonably assess the risk.
Hence it is doubtful that the consumer is ever in a position to make a reasonable trust decision, and must depend more on "branding" to make choices. LEGAL TRUST Digital signature schemes have appeared in the open literature for over 20 years. In recent times these schemes have matured into commercially available and viable products. Digital signatures are being widely accepted as a viable alternative to traditional signatures, and the RSA algorithm has developed commercially into the most widely implemented digital signature technology. The RSA algorithm is not the only technology available commercially. The Digital Signature Algorithm and the relatively new Elliptic Curve technology are vying as alternative technologies for digital signature implementations. The availability of new technologies for digital signature implementations adds to the trust needed in this new environment. However, it does mean that systems must allow for the use of at least all three "commercial standards." In turn, this would increase the risk of error or of lapses in system security. The foundation of the security of the RSA algorithm is the difficulty involved in factoring large numbers. If a sufficiently rapid method to factor very large numbers were developed, or if the speed of a computer or a network of computers became sufficient to factor large numbers in a short period of time, then the foundation of, and the trust reposed in, the RSA algorithm would no longer exist. More recently, a number of alternative "electronic signature" schemes have been developed, in contrast to digital signature schemes. These electronic signature schemes have also in recent times developed into commercially available products that are now being taken seriously by commerce and government as an alternative to traditional signature processes. They include biometric systems, such as thumb/finger print systems, signature dynamics systems, as well as others.
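The dependence of RSA's security on the difficulty of factoring, noted above, can be seen in a toy sketch. The parameters below are deliberately tiny and insecure, chosen purely for illustration; real keys use primes hundreds of digits long, and the message would in practice be a hash of the document.

```python
# Toy RSA signature sketch with deliberately insecure, illustrative parameters.
p, q = 61, 53                  # the secret primes; real keys use far larger primes
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public verification exponent
d = pow(e, -1, phi)            # private signing exponent (modular inverse; Python 3.8+)

def sign(m: int, d: int, n: int) -> int:
    return pow(m, d, n)        # signature = m^d mod n

def verify(m: int, s: int, e: int, n: int) -> bool:
    return pow(s, e, n) == m   # valid if s^e mod n recovers m

message = 65                   # in practice, the hash of the document being signed
signature = sign(message, d, n)
assert verify(message, signature, e, n)

# An attacker who can factor n recovers p and q, hence phi and the private
# exponent d, and can then forge signatures. With a toy modulus, trial
# division succeeds instantly; the trust rests entirely on this being
# infeasible for large n.
recovered = next(f for f in range(2, n) if n % f == 0)
assert recovered in (p, q)
```

The last two lines make the paper's point directly: the moment factoring the modulus becomes feasible, the private key, and with it the trust reposed in the scheme, is gone.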
The ABA report first proposed digital signature legislation to take account of a growing move by the US government and a number of major US corporations to digital signature technology. The State of Utah enacted digital signature legislation based on the ABA report. The ABA legislative model is generally referred to as full infrastructure legislation and has been criticised by Biddle and Winn. Winn argues that it is far too early to enact full infrastructure legislation when no one knows what the market will look like. In furtherance of her argument, Winn compares the current electronic commerce environment to the motor industry at the turn of the century. The internal combustion engine had only recently been invented, as had asphalt. Was that the appropriate time to enact law governing the development of motorways, when at that time there was no certainty that there was a market for that technology or what the ultimate market structure would look like? Despite this criticism, there are a number of jurisdictions throughout the world that have enacted digital signature legislation based on the ABA report, or are contemplating such legislation. The ABA-styled legislation has not concerned itself with the concept of trust. It does define a "trustworthy system," but this term is used from the aspect of requiring the Certification Authority to have a trustworthy system. The legislation has failed to account for the trustworthy system required by the signatory or the verifier of a signature. In contrast to the Utah-styled legislation governing digital signatures, a number of other jurisdictions have approached the issue of digital signature legislation in substantially different ways. For example, the State of California has limited the scope of its legislation to transactions where one of the parties is a public entity. Furthermore, this legislation is drafted using technology-neutral language.
Instead of specifying asymmetric encryption technology as the only acceptable method of affixing an electronic mark to a document, the legislation identifies the attributes that an electronic mark must possess before it will be accepted on the same footing as a traditional signature. The attributes of an electronic signature under California Act § 16.5 are:
(a) it is unique to the person using it;
(b) it is capable of verification;
(c) it is under the sole control of the person using it;
(d) it is linked to data in such a manner that if the data are changed, the signature is invalidated; and
(e) it conforms to regulations adopted by the Secretary of State.
Within the paper-based environment, the established trust mechanism has centered upon the signing mechanism. This was the primary reason that the concept of witnessing was established: to reduce the extent of identity fraud. That is, by requiring the signature to be witnessed by an independent third party, it was then:
(a) not possible for the signatory to deny having signed the document in question; and
(b) possible for an independent party to attest to the identity of the person who signed.
Like the ABA Digital Signature Guideline-style legislation, the Californian Code does not deal with the concept of "trust" from the signing perspective. It appears that the concept of "trust" in the electronic commerce environment has been taken for granted and has not been properly investigated. There is nothing in any of the US-based legislation that directly concerns itself with the concept of trust from the signatory's perspective. The first paper dealing with the concept of trust from a legislative approach was by Winn. But Winn believes that because the TCSEC was originally developed for military objectives, it is unlikely that such objectives will correspond to those of a business conducting electronic commerce; the cost of implementing systems conforming to military standards might be prohibitive in any event. It is submitted that this approach and argument by Winn is shortsighted and incorrect. The US General Accounting Office reported in 1998 that identity fraud will become a major problem within the electronic commerce environment. Identity fraud arises when a third party assumes the identity of another without authority and for an illegal purpose. The issue of identity fraud, and the related identity theft, is relatively easy to identify within an electronic commerce environment built on COTS products, and as such the issue of trust becomes of paramount importance. Identity fraud can easily be perpetrated by the use of a virus; the Caligula virus is a clear example. This virus is a Visual Basic macro virus attached to Word documents. Its function is to search the hard drive for the PGP secret key ring and, once the ring is found, to transfer the file by FTP to a remote location. The virus performs its functions without the knowledge of the owner of the PC; that is, it turns off certain display functions so that no dialogue boxes are displayed.
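The search step such a virus performs is trivial to reproduce on a COTS system, because nothing prevents a process running with the user's ordinary privileges from locating and reading a secret key ring stored as a plain file. A minimal sketch of that search (the keyring file names listed are illustrative of common PGP conventions):

```python
import pathlib

# File names commonly used for PGP secret key rings (illustrative list).
KEYRING_NAMES = {"secring.pgp", "secring.skr", "secring.gpg"}

def find_secret_keyrings(root: str) -> list[str]:
    """Walk a directory tree and return paths that look like secret key
    rings -- essentially the search a Caligula-style virus performs."""
    return [str(p) for p in pathlib.Path(root).rglob("*")
            if p.is_file() and p.name in KEYRING_NAMES]
```

On a trusted operating system with mandatory access controls, the keyring would carry a label that a word-processor macro could not read regardless of the user's own discretionary permissions; on a COTS system the file is simply there for the taking.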
On completion of the transfer the virus then checks the address book on the target PC and sends a Word document with the attached virus to five other people. Finally the virus destroys itself, in the process deleting any relevant entries in the sent-mail folder. This is a clear violation of security, and such behaviour should be incapable of being undertaken in a commercial environment. There are a number of technical solutions that could be implemented. The best method would be to have a trusted system (that is, a trusted operating system) in the first place, but this would take a substantial investment by the major computer software houses to implement, as well as substantial retraining of the general public. Such systems are not generally user friendly, as they require a great deal of discipline on the part of the user. It is therefore unlikely that the implementation of trusted systems is a viable option. The most likely scenario is the implementation of a trusted operating system for smart card technology. Smart cards have not yet been widely deployed, and the functionality of the card could be restricted to the signing mechanism only. Restricting the functionality of the smart card also means that less discipline is required on the part of the user. The PC was not designed for commercial use, but commerce has taken to it with such vigor that it is now the primary equipment within any business. The issue of computer security has for the most part been limited to the purview of commerce, but with electronic commerce anyone who is connected to a network and uses a computer to effect purchases should now take seriously the lack of security in the PC. It is suggested that the general community does not have the expertise to understand the serious flaws in the PC. The use of simple smart cards with restricted functionality is a viable solution, but it will, it is suggested, require Government intervention.
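The restricted-functionality idea can be sketched in software: the token exposes only a sign operation and the public key, and no code path ever returns the private exponent. (The textbook-RSA arithmetic and tiny parameters below are illustrative only; a real smart card enforces this boundary in tamper-resistant hardware rather than in a class definition.)

```python
import hashlib

class SigningToken:
    """Sketch of a signing-only token: callers can obtain signatures and
    the public key, but no operation ever exports the private exponent."""

    def __init__(self, p: int, q: int, e: int):
        self._n = p * q
        self._e = e
        # The private exponent is computed and kept internal to the token.
        self.__d = pow(e, -1, (p - 1) * (q - 1))

    @property
    def public_key(self) -> tuple[int, int]:
        return (self._n, self._e)      # the only key material ever exposed

    def sign(self, message: bytes) -> int:
        h = int.from_bytes(hashlib.sha256(message).digest(), "big") % self._n
        return pow(h, self.__d, self._n)

# Verification needs only the public key, so it can run anywhere.
def verify(message: bytes, signature: int, n: int, e: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

token = SigningToken(61, 53, 17)       # toy parameters, illustrative only
n, e = token.public_key
sig = token.sign(b"order #42")
assert verify(b"order #42", sig, n, e)
```

Because the only exported operations are signing and reading the public key, a Caligula-style scan of the file system finds no key material to steal; the worst an attacker can do is request signatures while the card is present, a far smaller and more auditable exposure.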
It is the Government's role to better protect the general public from the lack of security mechanisms in PCs when the public intends to effect commercial transactions electronically. The only legislation to date that specifically deals with the signing mechanism and the concept of trust is the German legislation. Ordinance 17 of the German digital signature ordinance specifies that the component used for the generation of the pertinent cryptographic key pair must be classified at least "E4" under ITSEC, and that the component used for the affixing of a digital signature must be classified at least "E2". The use of the term "component" implies that the whole system need not be so classified; only the component used to sign the electronic document must be. At least one academic has questioned the commercial value of Ordinance 17, because there appears to be only one corporation, a German corporation marketing certain smart-card technology, that can supply components satisfying this high security criterion. However, other subsystems do exist at this level of trust evaluation and, as with seat belts in cars, the aim of the legislation is to get manufacturers to comply in unison with security and safety requirements. The role of a "trusted path" for the signing mechanism is highly important and should therefore be addressed within the digital signature regime. Without this attention to detail, it is technically possible for identity fraud to be effected easily. The legislative regime should require the implementation of appropriate technological mechanisms that provide sufficient identification of the signatory as well as information integrity and signatory authentication. Most research on signatory identification has concentrated upon the non-repudiation of the signatory who is alleged to have signed the relevant electronic document.
But without the requirement of a trusted path for the signing mechanism, it is always possible for the alleged signatory to deny having signed the electronic document. ...