MD5 – The Hash Algorithm Is Now Broken!

Posted by CRYPTOcrat | Encryption | Monday 5 January 2009 7:16 pm

Author:

This blog is authored by Aniruddha Shrotri, a fellow CRYPTOcrat. Aniruddha is the CTO and Co-founder of E-Lock. E-Lock specializes in Digital Signature & Electronic Signature Software Solutions. Aniruddha has been active in the PKI domain for a long time and is in a great position to write a note about what this recent news about MD5 means.

You can find more information about Aniruddha on his LinkedIn Profile.

Abstract

Here is wishing all the CRYPTOcratians a very Happy and Secure New Year! But I am afraid the New Year has brought with it a disturbing piece of news – the hash algorithm MD5 is broken – not just in theory but in practice too.

Here is a very recent and good article http://news.cnet.com/8301-1009_3-10129693-83.html that describes how the weakness in MD5 was successfully exploited to create a fake website with HTTPS that would pass the browser test. Since 2004 it has been known that a weakness existed in MD5 but this is the first time this weakness has been exploited to create a practical live demonstration.

Issues that are raised by this demo

Clearly MD5 is broken and the consequences of this can be quite grave. Here are a few points to ponder over this development:

  • The weakness was demonstrated by faking a certificate so that an SSL-enabled fake website could successfully spoof a genuine website. This means that when people browse and see that the URL contains HTTPS and/or a lock icon in the browser status bar, they can no longer be 100% assured that they are looking at a genuine website. If this assurance is gone, how else is one supposed to verify whether one is dealing with a genuine or a fake website?
  • The resources required to create this exploit were not enormously large – 200 PlayStation 3 consoles running for two weeks. This is certainly not beyond the means of a determined hacker; as the article mentions, this much computing power could probably be acquired from Amazon for about $1,500! Consider this effort against the pay-off a rogue hacker might expect from harvesting the identity information of hundreds of thousands of unsuspecting customers of important financial institutions like banks.
  • In faking a website, another issue the hacker has to tackle is how to get victims to connect to the rogue website rather than the genuine one. A number of phishing attacks are possible. The simplest is sending genuine-looking fake emails with links that appear to go to the genuine website but whose actual URL points to the rogue one. More advanced techniques include poisoning the DNS cache of common public DNS servers; this is especially insidious because there is practically no way for the victim to tell that he or she is being taken to the fake website – even typing the correct URL leads to the wrong site! Finally, hackers can set up public Wi-Fi access points offering "open" and "free" network access; users of such free Internet won't even know that some of the links they visit are actually spoofs. In all, it is not very difficult to lure unsuspecting victims to a fake website (phishing attacks), and now it turns out it is not difficult either to give victims a false sense of comfort by enabling the phished websites with SSL/HTTPS.
  • Currently this attack exploits a weakness in MD5. Recall that some time back SHA-1 too was shown to be weak (though nowhere near as weak as MD5). It is only a matter of time before a similar attack becomes possible on SHA-1 as well. This is a grave scenario, as MD5 and SHA-1 are the only two hash algorithms in widespread practical use. The replacements for SHA-1 – SHA-2 and the forthcoming SHA-3 – are still a long way from widespread deployment; the SHA-3 candidate challenge has only recently been thrown open, and practical implementations will take even longer to arrive.
  • This attack is not only about faking websites – it applies equally to digital signature applications. Basically, breaking MD5 means being able to create another set of data whose MD5 hash is identical to the MD5 hash of some given data. So if I receive a digitally signed document, I can create another document whose MD5 hash matches the MD5 hash of the original document and simply transplant the digital signature from the original document to the new one. I can now present this new document, bearing the proper digital signature of the original signer, and the signer would never be able to repudiate having signed it. Imagine changing someone's will, or changing the terms of a contract, this way!
  • One might argue that it may not be worthwhile for an attacker to spend great resources to fake a single document (remember, MD5 hashes of different documents are almost always unique, so in order to fake another document the whole exercise would have to be repeated). Whether that is true depends on the value of the document – but it is not even required. As demonstrated in the attack referred to above, it is sufficient to create one fake certificate. The fake certificate can then be used to sign any number of documents without spending large resources again – the resources are needed only to create the fake certificate.
  • If a hacker fakes the certificate of an end user, he or she will have to spend a similar amount of resources again for each additional end-user certificate. However, if a CA's certificate is faked, it can be used to issue fake certificates for any number of users without spending big resources on each end-user certificate.
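The signature-transplant risk described above comes down to one fact: a signature binds to the hash of the data, not to the data bytes themselves. The sketch below illustrates this with a toy keyed transform standing in for real RSA signing – `toy_sign`, `toy_verify` and `SECRET` are illustrative inventions, not a real API – so any second message with the same MD5 digest would verify against the same signature.

```python
import hashlib

# Toy illustration (NOT real RSA): the "signature" here is just a keyed
# transform of the message digest, to show that the signature binds only
# to the hash value, never to the message bytes themselves.
SECRET = b"signer-private-key"

def toy_sign(message: bytes) -> bytes:
    digest = hashlib.md5(message).digest()           # hash the message
    return hashlib.sha256(SECRET + digest).digest()  # "sign" only the digest

def toy_verify(message: bytes, signature: bytes) -> bool:
    digest = hashlib.md5(message).digest()
    return hashlib.sha256(SECRET + digest).digest() == signature

original = b"I owe you $10"
sig = toy_sign(original)

# Any second message with the SAME MD5 digest would verify with the SAME
# signature -- exactly the transplant attack described above.
assert toy_verify(original, sig)
assert not toy_verify(b"I owe you $10,000", sig)  # different hash -> fails
```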

How was the fake certificate created?

A link is given in the article referred to above to a fake SSL-enabled site, which interested parties can see by clicking here: https://i.broke.the.internet.and.all.i.got.was.this.t-shirt.phreedom.org/. If you observe the SSL certificate chain, the middle certificate in the chain is the fake one. That is, the root CA "Equifax Secure Global eBusiness CA-1" has actually never issued a certificate to "MD5 Collisions Inc. (http://phreedom.org/md5)", but if you look at the certificate chain, you would be led to believe that it has. This is the fake certificate.

The question is how it was created. Obviously, the researchers constructed the certificate to contain whatever they wanted and then transplanted the Equifax root CA's signature from some other legitimate certificate onto this fake one. This of course requires that the MD5 hash of the to-be-signed part of both certificates be identical. But getting the MD5 hash of the exact data you want (the new fake certificate content) to equal some pre-determined value (the MD5 hash of the original certificate from which the signature is transplanted) is nearly impossible. So one would have to keep adding extra data to the desired content and keep trying until the hashes match. In a certificate, such extra data can easily be put in a field called a "certificate extension", which is allowed to contain pretty much anything – including your cat's picture. As long as the extension is not marked "critical", receivers are supposed to ignore any extension they do not understand.
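The "keep adding extra data until the hashes match" search can be illustrated with a deliberately weakened hash – MD5 truncated to 24 bits – so a birthday collision is findable in seconds. (The real attack needed far more sophisticated chosen-prefix techniques against full MD5; this is only a sketch of the search idea, and the prefix string is a made-up stand-in for certificate content.)

```python
import hashlib

def weak_hash(data: bytes) -> bytes:
    # MD5 truncated to 24 bits, so a collision appears after a few
    # thousand attempts; full MD5 needs the advanced techniques the
    # researchers used, but the search idea is the same.
    return hashlib.md5(data).digest()[:3]

prefix = b"to-be-signed-certificate;extension-padding="  # hypothetical content
seen = {}
pair = None
for i in range(1_000_000):
    candidate = prefix + str(i).encode()
    h = weak_hash(candidate)
    if h in seen:                     # birthday collision found
        pair = (seen[h], candidate)
        break
    seen[h] = candidate

a, b = pair
assert a != b and weak_hash(a) == weak_hash(b)  # two inputs, one hash
```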

So, I had expected to see some unknown extension in the fake certificate with potentially large random looking data, so that when this data was taken together with the rest of the certificate content, the hash would match some pre-decided hash. To my surprise, I found no such extension – all the information in the certificate is standard and well recognized by all browsers.

There is a good and detailed explanation of how the fake certificate was created if you follow the links given on the website. It is worth reading if you are interested in the details, and it is explained in terms not far from what laymen would understand, though it is still very intricate. To summarize: they did not transplant the signature from any old certificate that the root CA had signed – they specially constructed a real certificate, which they got the root CA to sign. They required the root CA to issue certificates with a predictable validity period and serial number. They used a chosen-prefix collision attack, in which some prefix of both pieces of colliding data can be chosen by the attacker – this alone would probably rule out transplanting the signature from just any old certificate onto the fake one. The major portion of the collision block was absorbed into the RSA public key modulus. Some "tumor" (as they call it) is visible in the fake certificate in the form of strange content in its "Netscape Comment" extension.

What can be done

Having scared you enough with the different things hackers can do to make your life miserable, let's look at what can be done to alleviate the pain a bit:

  • All CAs and all digital signature software should stop using MD5 as the hash algorithm.
  • All CAs should revoke any certificates issued using MD5 and re-issue them with SHA-1 or better algorithms. Note that this and the previous step actually may not help much. The reason is that hackers won't be using the CA-issued certificate directly anyway – they will create a new fake certificate that looks like it was issued by the said CA and transplant the signature from a previously issued certificate that used MD5. Even though this particular attack required the team to create a new valid certificate that "looked like" the fake certificate they were going to create, it is conceivable that with some improvements the hacker would just need any certificate issued by the CA using MD5 for the transplant – it would not matter whether that certificate is expired, revoked or live.
  • Browsers should start issuing at least a warning when they find any certificate that uses MD5 in a trust chain during an SSL transaction. Given both the points above, it is a wonder that companies like Verisign and Microsoft have chosen to downplay this attack.
  • Digital signature software should all start doing the same during signature verification – flag at least a warning if it detects the use of MD5 in the signature or in any certificate up the trust chain.
  • Even though in this instance there was no "unknown" extension in the certificate, it would be prudent for browsers and digital signature verification software to flag a warning if any certificate in the chain contains an unknown extension (even one marked "not critical") holding suspicious-looking data (e.g., ASN.1 Octet String or Bit String data types).
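As a rough sketch of the MD5-in-chain check suggested above, suppose the verifier has already extracted the signature-algorithm name from each certificate in the chain (as it would from the certificates' signatureAlgorithm fields); the `chain_warnings` helper below is hypothetical, not any browser's real API:

```python
# Flag any certificate in a chain whose signature uses a weak hash.
# The algorithm names are the standard X.509 signature-algorithm labels.
WEAK_HASHES = {"md5", "md2"}

def chain_warnings(signature_algorithms):
    """Return the algorithm names in the chain that warrant a warning."""
    return [alg for alg in signature_algorithms
            if any(weak in alg.lower() for weak in WEAK_HASHES)]

chain = ["sha256WithRSAEncryption",   # leaf
         "md5WithRSAEncryption",      # intermediate -- should be flagged
         "sha1WithRSAEncryption"]     # root
assert chain_warnings(chain) == ["md5WithRSAEncryption"]
```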

Please feel free to send in your inputs using the comments section below. Additionally you could also use our group’s discussion forum link to send in your comments for Aniruddha.

    Simultaneous Encryption using Linear Block Channel Coding Part 2

    Posted by CRYPTOcrat | Encryption | Wednesday 29 October 2008 7:24 am

    In continuation from Part 1 of this article here.

    Linear Block Codes: Generator Matrix and Systematic Codes

    Linear block codes (LBCs) add redundancy to the data in the form of parity bits, which provide the information needed for error detection and correction.

    The encoding operation is the vector-matrix equation:

    U = mG …(1)

    where m is the 1 × k message vector and G is the k × n generator matrix. (The expanded matrix form, equation (2), appeared as an image in the original.)

    To form a systematic code, the generator matrix G can be written in terms of the sub-matrices P and Ik as follows:

    U = mG = m[P | Ik] …(3)

    (The expanded form, equation (4), appeared as an image in the original.)

    Error Detection and the Parity-Check Matrix

    H^T is an n × (n - k) matrix (whose rows are the columns of H). To fulfill the orthogonality requirements of a systematic code, the H matrix can be written as H = [In-k | P^T], where In-k is an (n - k) × (n - k) identity sub-matrix and P is the parity sub-matrix defined above. By this definition of H we have GH^T = 0, and since each codeword U is a linear combination of the rows of G, a vector r is a codeword generated by the matrix G if and only if

    rH^T = 0 …(5)

    Equation (5) is the basis for verifying whether a received vector r is a valid code word.
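Equation (5) can be checked concretely. The sketch below builds the systematic (6, 3) code used as the running example in Sklar [3] – the parity sub-matrix P is an assumption here, since the matrices in the original post appeared as images – and verifies that a valid codeword gives a zero syndrome while a corrupted one does not:

```python
# G = [P | I3], H = [I3 | P^T] for a systematic (6,3) linear block code.
# P is assumed; any valid parity sub-matrix exhibits the same behaviour.
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
I3 = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

G = [P[i] + I3[i] for i in range(3)]                         # 3x6 generator
H = [I3[i] + [P[j][i] for j in range(3)] for i in range(3)]  # 3x6 parity-check

def encode(m):
    # U = mG over GF(2)
    return [sum(m[i] * G[i][j] for i in range(3)) % 2 for j in range(6)]

def syndrome(r):
    # S = r*H^T over GF(2): one bit per row of H
    return [sum(r[j] * H[i][j] for j in range(6)) % 2 for i in range(3)]

U = encode([1, 0, 1])
assert syndrome(U) == [0, 0, 0]     # valid codeword -> rH^T = 0

U[2] ^= 1                           # introduce a single-bit error
assert syndrome(U) != [0, 0, 0]     # corrupted word -> nonzero syndrome
```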

    Towards Error Correction: Syndrome Testing

    Following (5), we define a syndrome vector S of r as

    S = rH^T …(6)

    Error Correction:

    The syndrome relates directly to the actual error pattern. Writing the received vector as r = Ui + ej:

    S = rH^T = (Ui + ej)H^T = UiH^T + ejH^T = ejH^T …(7)

    since UiH^T = 0 by (5). From this, the error pattern can be determined.

    The Scheme for Simultaneous Channel Coding and Encryption:

    Key Matrix:

    We use an n × n key matrix [L].

    The key matrix is a square matrix chosen so that its inverse exists.

    Hence, find L^-1 such that [L][L^-1] = [I] …(8)

    We call the matrix L^-1 the multiplicative inverse for the [L] operation.

    Encrypted Code-Word Matrix U'':

    The code-word matrix U is converted into the encrypted matrix U'' by

    [U][L] = [U''] …(9)

    The code words used for communication are all drawn from the [U''] matrix, which cannot be decoded unless the n × n matrix [L] is known. The received vector will have an added error, as given below:

    Z = U'' + e …(10)

    For decoding the code-word to the right vector, we use the multiplicative inverse at the receiving end.

    Hence,

    S = ZL^-1H^T = (UiL + ej)L^-1H^T …(11)

    = (UiLL^-1H^T) + (ejL^-1H^T)

    = (UiH^T) + (ejL^-1H^T)

    S = ejL^-1H^T …(12)

    Hence we modify the syndrome table from ejH^T to ejL^-1H^T.

    With this minor change, the whole code word is retrieved along with the error pattern.
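A toy end-to-end run of the encryption layer can make this concrete. The author's actual 6 × 6 key matrix [L] is not reproduced in the post, so a permutation matrix stands in for it below: it is the simplest invertible binary matrix (its inverse is just its transpose), but any invertible [L] over GF(2) behaves the same way.

```python
# Stand-in key matrix [L]: a 6x6 permutation matrix (invertible over GF(2)).
perm = [2, 0, 3, 5, 1, 4]
L = [[1 if perm[i] == j else 0 for j in range(6)] for i in range(6)]
L_inv = [[L[j][i] for j in range(6)] for i in range(6)]  # transpose = inverse

def vec_mat(v, M):
    # row vector times matrix over GF(2)
    return [sum(v[i] * M[i][j] for i in range(6)) % 2 for j in range(6)]

U = [0, 1, 1, 1, 0, 1]          # a 6-bit codeword of the (6,3) code
U2 = vec_mat(U, L)              # encrypted codeword U'' = U * L   ...(9)
assert U2 != U                  # transmitted word hides the codeword

recovered = vec_mat(U2, L_inv)  # receiver applies L^-1 before decoding
assert recovered == U           # original codeword retrieved
```

With an error-free channel the receiver recovers U exactly; with errors, the modified syndrome table of expression (12) takes over from here.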


    Example: (6, 3) Linear Block Code

    Table 2 describes a code word-to-message assignment for a (6, 3) code (Sklar [3]).

    The Generator Matrix for (6,3) code:

    [G] = (3 × 6 generator matrix, given as an image in the original) …(13)

    The Message Matrix [M] for (6,3) code:

    [M] = (8 × 3 message matrix, given as an image in the original) …(14)

    The Key Matrix for this (6,3)code:

    [L] = (6 × 6 key matrix, given as an image in the original) …(15)

    Calculating [U] = [M][G]:

    [U] = (8 × 6 code-word matrix, given as an image in the original) …(16)

    Calculating the New Encrypted Code Word Matrix U”

    [U''] = [U][L] = (given as an image in the original) …(17)

    Note that [U''] is quite different from [U] and cannot be traced back unless one has the key matrix [L].

    Retrieval of [U] from [U'']

    Getting [U] = [U''][L^-1] = (given as an image in the original) …(18)

    The rest of the decoding can now be done using the standard decoder, and the syndromes may be interpreted for error patterns with the minor change explained in expression (12).

    Conclusions:

    Using channel coding for simultaneous encryption is a better choice than using source coding, because of the redundancy increase inherent in channel coding and the possibility of longer encryption keys.

    We present here a novel way to simultaneously encrypt the message while channel coding is performed. The proposed method does not alter the normal performance of channel coding by linear block codes, and infuses encryption as an added advantage. Nor does it impose any extra overhead, since the matrix G'' can be pre-calculated as [G][L]. Similarly, at the receiving end, the matrix L^-1H^T can be calculated beforehand as [L^-1][H^T].

    It should be noted that the method does not use the standard approach of exploiting the freedom within the coding method. Instead, it uses the concept of a multiplicative inverse to remove the encryption layer imposed by a prior multiplication.

    References

    [1] Chung-E Wang, "Cryptography in Data Compression", CodeBreakers Journal, Vol. 2, No. 3 (2005).

    [2] Chung-E Wang, "Simultaneous Data Compression and Encryption", Security and Management 2003: 558-563.

    [3] Bernard Sklar, "The ABCs of Linear Block Codes", IEEE Signal Processing Magazine, July 2004.

    [4] John G. Proakis, "Digital Communications", McGraw-Hill, 2007.

    [5] B. Sklar, Digital Communications: Fundamentals and Applications, 2nd ed., Englewood Cliffs, NJ: Prentice-Hall, 2001.

    [6] T. Kasami, Klove, and S. Lin, "Linear block codes for error detection", IEEE Trans. Inform. Theory, vol. IT-29, pp. 131-136, Jan. 1983.

    [7] W.W. Peterson and E.J. Weldon, Error Correcting Codes, Cambridge, MA: MIT Press, 1972.

    [8] Online Matrix Multiplier: http://wims.unice.fr/wims/wims.cgi

    Simultaneous Encryption using Linear Block Channel Coding Part 1

    Posted by CRYPTOcrat | Encryption | Sunday 5 October 2008 6:38 pm

    This article is authored by Rohit Pandharkar, a fellow CRYPTOcrat. Rohit is currently pursuing his undergraduate studies at College of Engineering Pune. He has great interest in Cryptography and Mathematics, and at this early stage has already published a couple of research papers related to Cryptography.

    You can find more information about Rohit on his LI Profile.

    Introduction

    Channel coding is an error-control method that provides robust data transmission through imperfect channels by adding redundancy to the data. Two vital classes of such coding techniques are block codes and convolutional codes. In this article, we focus on linear block coding and its use for encryption.

    Previous attempts at simultaneous source coding and encryption mainly involved exploiting the freedom available within compression algorithms. Chung-E Wang [1], [2] has worked extensively on cryptography in data compression, and has delved into the possibilities of exploiting the freedom in source coding algorithms for the sake of encryption.

    Need to use Channel Coding for Encryption:

    We opt for channel coding as a better point of infusing encryption as compared to the source coding because of the following reasons:

    Limitations of infusing secrecy when simultaneously compressing.

    A) Limitations due to reduction in Redundancy

    Source coding algorithms by default decrease the redundancy in the original data, whereas channel coding techniques increase it. According to Shannon, the more redundancy there is, the more secrecy can be infused. Hence, channel coding is a better choice for simultaneous encryption.

    B) Limitation on the length of the key

    The length of the key is an important factor in determining the strength of encryption. Longer keys help prevent brute-force and chosen-plaintext attacks.

    In Huffman coding, the limitation of the joint compression-and-encryption approach is that the key length cannot exceed the number of parent nodes in the tree. Hence, the maximum key length depends on the message characteristics and the prefix-coding scenario. This constraint is related to:

    1. Number of symbols

    2. Probability distribution of the symbols

    However, as shown in what follows, encryption using channel coding can use a key matrix of n × n dimensions, and hence supports much greater key lengths.

    Combining ‘Channel’ Coding and Encryption without using freedom:

    Considering the relations established by Shannon between source coding and encryption, and looking at the possibility of exploiting the freedom in the source coding algorithms, we look for alternative possibilities for simultaneous channel coding and encryption without using the freedom within the algorithms.

    In this article, we present a Matrix-based method for simultaneous Encryption and channel coding using Linear Block Codes. The method presented is different from the previous attempts by others so far for simultaneous coding and encryption in that it does not use the freedom or choice involved in the coding algorithms. It essentially uses a key matrix to add an encryption layer on the code-word matrix. The original code-word matrix is recovered by nullifying the encryption layer at the receiving end. This needs special Key Matrix design. The design considerations for this key matrix are elaborated and the encryption procedure is explained with the help of an example for the (6,3) Linear Block Code.

    To be continued in the next part.

    New Algorithm based on The Three Pass Protocol

    Posted by CRYPTOcrat | Encryption | Sunday 3 August 2008 8:30 am

    This blog is authored by Rohit Pandharkar, a fellow CRYPTOcrat. Rohit is currently pursuing his undergraduate studies at College of Engineering Pune. He has great interest in Cryptography and Mathematics, and at this early stage has already published a couple of research papers related to Cryptography.

    You can find more information about Rohit on his LI Profile.

    Introduction:

    Adi Shamir's Three Pass Protocol, proposed around 1980, is a creative use of the commutative property of certain mathematical functions. It calls for three passes between Alice and Bob to communicate a message 'x', and it enables the message to be transferred from one party to the other without exchanging any encryption keys.

    Here is a quick summary of Shamir’s idea:

    Pass 1 (A to B): Transmission of a masked message 'x' from Alice to Bob.

    Pass 2 (B to A): Introducing a contribution/impression from Bob's end.

    Pass 3 (A to B): Alice removes the mask she had introduced on the original message 'x', but the imprint inserted by Bob still prevents it from being revealed.

    After Pass 3: Bob removes this imprint by a computation at his end.

    This effectively recovers the message 'x', as all masks have now been removed; and since this recovery happens at Bob's end, the eavesdropper Eve never has access to 'x'.

    Massey Omura Algorithm:

    Let us now look at this well known algorithm based on Shamir’s protocol:

    Alice and Bob agree over prime: ‘p’

    Alice decides private keys: m,n such that m*n=k(p-1)+1, k: Integer, secret message ‘x’ < p

    Bob decides private keys: M,N such that M*N=z(p-1)+1, z: Integer.

    Passes:

    Pass 1: Alice sends A = x^m mod p to Bob.

    Pass 2: Bob raises A to the power M and sends back B = A^M mod p.

    Pass 3: Alice raises B to the power n and sends C = B^n mod p to Bob.

    At Bob's end: Bob computes D = C^N mod p = x, the secret message.

    The proof is very simple and is based on Fermat's little theorem; for details you may refer to the following link http://www.mathlab.cornell.edu/computer_and_portfolio/discrete/prime_power/
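The three passes above can be run end-to-end with small illustrative numbers (toy-sized prime for readability; real use needs a large prime). Each party's exponent pair (e, d) satisfies e·d = 1 mod (p-1), so the final value is x^(mn)(MN) = x^1 = x mod p:

```python
# Massey-Omura three-pass demo with toy-sized parameters.
p = 2579                          # public prime (illustrative size only)
x = 1234                          # Alice's secret message, x < p

def keypair(e, p):
    # private exponent pair (e, d) with e*d = 1 mod (p-1);
    # requires gcd(e, p-1) == 1
    return e, pow(e, -1, p - 1)

m, n = keypair(7, p)              # Alice's private keys
M, N = keypair(11, p)             # Bob's private keys

A = pow(x, m, p)                  # Pass 1: Alice -> Bob
B = pow(A, M, p)                  # Pass 2: Bob -> Alice
C = pow(B, n, p)                  # Pass 3: Alice -> Bob
D = pow(C, N, p)                  # Bob unwraps his own exponent

assert D == x                     # message recovered, no key ever exchanged
```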

    New Proposal:

    As mentioned before, the Massey-Omura algorithm uses exponentiation of the part sent by Alice, so Bob has to wait for Alice to send the result of the first pass. This dependency continues in the second pass, with roles reversed, and repeats in the third pass.

    This adds an element of 'wait period' to processing and transmission. Secondly, Bob is required to compute (x^m mod p)^M.

    Now let’s look at a possible improvement in the processing above. The rationale for this improvement is as follows

    1.      Is it possible for Bob to be ready with his imprint, even before the first pass is received, and immediately add it once Bob receives it?

    2.      Is it possible to have simple multiplication by some (what we call) "adulteration", rather than exponentiation, for masking?

    So here we go:

    How about having a salting number 'y', decided at Bob's end, just like 'x'? Here, 'y' need not really be a secret – it is only a salting number used to add the so-called adulteration. This apparently provides an affirmative answer to both the questions raised above.

    Now the requirement is, the adulteration must also be cleaned-up after third pass, at Bob’s end, so that he unfurls the real hidden message.

    All this sounds along the lines of Shamir's core idea; however, we could use a multiplicative adulteration instead of the exponential masking used by Massey-Omura.

    The questions that need deeper research are -

    1.      What are the possible constraints on the selection of salting number ‘y’ for using it as adulterating element?

    2.      How to find the Multiplicative Inverse of ‘y’ to reveal the hidden message ‘x’ enabling us to clean-up the adulteration introduced? (to be done at Bob’s end after the third pass).

    3.      Will that create symmetry issues? (Massey Omura is symmetric in terms of private key selection criterion for Alice and Bob.)

    If you have answers, more questions or comments about this post please feel free to send in those using the “Comments” section below.

    I shall leave this post unconcluded, and in the following part we will look at a much more detailed analysis, which is my proposal for this new algorithm. Stay tuned.

    Update : 3rd August 2008

    Continuing our previous discussion on this topic, we will now look at the details of the proposed algorithm.

    Selection of private keys: Alice and Bob agree on a big prime 'p'.

    Deciding primes: Alice and Bob privately pick large primes m and M respectively. Each also checks that their prime has no common factor with p-1. (Here p is the publicly known prime.)

    Solving Diophantine equations: Alice privately finds an integer n so that m + n = (p-1)z + 1, where z is any integer, and Bob finds an integer N so that M + N = (p-1)k, where k is any integer. Then m and n are the private keys of Alice, and M and N are the private keys of Bob.

    Message and secret number selection: Alice selects her message x and Bob decides his secret number y, such that x << p and y << p.

    The Scheme

    Alice (User A):
    Step 1 (Pass 1): Compute A = x^m (mod p) and transmit A to Bob.
    Step 2 (Pass 3): Compute C = B · x^n (mod p) and transmit C to Bob.

    Bob (User B):
    Step 1 (Pass 2): Compute B = A · y^M (mod p) and transmit B to Alice.
    Step 2: Compute D = C · y^N (mod p). This is the original message x.

    The proof of the algorithm is based on Fermat's little theorem, where 'y' is the salting number used for multiplicative adulteration. The powers of 'y' die down (recall that M + N is a multiple of p-1, so y^(M+N) ≡ 1 mod p), and only a single power of 'x' survives, again by Fermat's little theorem. This is what the introduction of the salting number 'y' and multiplicative adulteration buy us.
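The scheme can be checked end-to-end with toy numbers. The values below are illustrative only; the exponents are simply chosen to satisfy m + n = z(p-1) + 1 and M + N = k(p-1) as defined above, so D = x^(m+n) · y^(M+N) = x · 1 = x (mod p):

```python
# Sketch of the proposed multiplicative three-pass variant (toy sizes).
p = 2579                          # public prime
x = 1234                          # Alice's secret message, x < p
y = 999                           # Bob's salting number (need not be secret)

m = 1001                          # Alice picks m, then n with m+n = z(p-1)+1
n = (p - 1) + 1 - m               # here z = 1
M = 700                           # Bob picks M, then N with M+N = k(p-1)
N = (p - 1) - M                   # here k = 1

A = pow(x, m, p)                  # Pass 1: Alice -> Bob
B = (A * pow(y, M, p)) % p        # Pass 2: Bob adulterates with y^M
C = (B * pow(x, n, p)) % p        # Pass 3: Alice completes x^(m+n)
D = (C * pow(y, N, p)) % p        # Bob: y^(M+N) = 1, x^(m+n) = x (Fermat)

assert D == x                     # message recovered
```

Note that Bob can compute y^M and y^N before the first pass ever arrives, which is exactly the wait-period improvement motivating the proposal.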

    Questions answered by this technique

    1.      Is it possible for Bob to be ready with his imprint, even before the first pass is received, and immediately add it once Bob receives it? Yes: The imprints can be based on powers of ‘y’ which Bob already knows.

    2.      Is it possible to have simple multiplication by some (what we call) "adulteration" rather than exponentiation for masking?

    Yes, the above algorithm does so by multiplying by y^M and y^N.

    Now, answering further questions raised in the earlier part of the article:

    1.      What are the possible constraints on the selection of salting number ‘y’ for using it as adulterating element?

    y is an integer <p.

    2.      How to find the Multiplicative Inverse of ‘y’ to reveal the hidden message ‘x’ enabling us to clean-up the adulteration introduced? (to be done at Bob’s end after the third pass).

    The multiplicative inverse of y^M (mod p) is, indirectly, y^N: since M + N is a multiple of p-1, Fermat's little theorem gives y^M · y^N = y^(M+N) ≡ 1 (mod p).

    3.      Will that create symmetry issues? (Massey Omura is symmetric in terms of private key selection criterion for Alice and Bob.)

    Yes, it does: the selection of private keys by Bob and Alice does not use symmetric expressions.

    If you have answers, more questions or comments about this post please feel free to send in those using the “Comments” section below.

    Instruction Set Extensions – Creating A Symmetric-Key Crypto-Processor

    Posted by CRYPTOcrat | Encryption | Sunday 29 June 2008 6:29 pm

    Abstract

    In this article, the author will talk about identifying computationally intensive operations within classifications of algorithms, such as symmetric-key ciphers. These operations require many instructions to implement when targeting a general-purpose processor. The concept of instruction set extensions will be introduced to accelerate these operations by off-loading them to custom hardware attached to the processor's datapath, accessed via newly defined instructions in the processor's control logic.

    The article is authored by Dr. Adam Elbirt a long time CRYPTOcrat and who is currently working as an Assistant Professor at University of Massachusetts Lowell.

    You can find more information about Dr. Elbirt on his LI Profile.

    Creating A Symmetric-Key Crypto-Processor

    Most algorithms can be broken down into a finite number of core operations.  When implementing an algorithm in software targeting a general-purpose processor, some core operations are easy to implement, requiring few instructions, while others are significantly more complex, requiring numerous instructions.  An example of a core operation easily implemented in software is key addition, typically achieved by bit-wise XORing a round key with data.  Examples of more complex core operations are bit-level permutations and long-number arithmetic.  Numerous instructions are required because the datapath of a general-purpose processor does not directly support these operations, due to limited processor word size, the requirement that data be operated upon in bytes or multiples of bytes instead of bits, the lack of a required ALU unit, etc.
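The contrast between the two kinds of core operation can be made concrete. In the sketch below (Python standing in for what would be machine instructions), key addition maps to a single ALU operation per word, while a bit-level permutation needs an extract/shift/or loop over every bit – exactly the kind of operation worth off-loading to a custom functional unit:

```python
def key_addition(data: int, round_key: int) -> int:
    # One XOR: effectively a single ALU instruction per processor word.
    return data ^ round_key

def bit_permute(data: int, perm, width=32):
    # perm[i] = source bit position for output bit i.  On a general-purpose
    # processor this costs several operations for each of the `width` bits.
    out = 0
    for i, src in enumerate(perm):
        out |= ((data >> src) & 1) << i
    return out

# A simple permutation: rotate a 32-bit word left by one position.
rotate_left_1 = [(i - 1) % 32 for i in range(32)]

assert key_addition(0b1100, 0b1010) == 0b0110
assert bit_permute(1, rotate_left_1) == 2
```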

    When using a general-purpose processor to implement symmetric-key cryptographic algorithms such as block ciphers, even the fastest software implementations cannot satisfy the required bulk data encryption data rates for high-end applications such as ATM networks which require an encryption throughput of 622 Mbps. As a result, hardware implementations are necessary for block ciphers to achieve this required performance level. Although traditional hardware implementations lack flexibility, configurable hardware devices offer a promising alternative for the implementation of processors via the use of IP cores in Application Specific Integrated Circuit (ASIC) and Field Programmable Gate Array (FPGA) technology. To illustrate, Altera Corporation offers IP core implementations of the Intel 8051 microcontroller and the Motorola 68000 processor in addition to their own Nios®-II embedded processor. Similarly, Xilinx Inc. offers IP core implementations of the PowerPC processor in addition to their own MicroBlaze™ and PicoBlaze™ embedded processors. ASIC and FPGA technologies provide the opportunity to augment the existing datapath of a processor implemented via an IP core to add acceleration modules supported through newly defined instruction set extensions targeting performance-critical functions. Moreover, many licensable and extendible processor cores are also available for the same purpose.

    The use of instruction set extensions follows the hardware/software co-design paradigm to achieve the performance and physical security associated with hardware implementations while providing the portability and flexibility traditionally associated with software implementations. Moreover, when considering alternative solutions, instruction set extensions result in significant performance improvements versus traditional software implementations with considerably reduced logic resource requirements versus hardware-only solutions such as co-processors.  The idea is to “improve the wheel” rather than to “reinvent the wheel”.

    Examples of instruction set extensions designed to improve the performance of cryptographic algorithms include those implemented to perform arithmetic over the Galois field GF(2^m), usually targeting elliptic curve cryptography (ECC) systems. Word-level polynomial multiplication was shown to be the time-critical operation when targeting an ARM processor, and a special Galois field multiplication instruction resulted in significant performance improvement. Instruction set extensions targeting a SPARC V8 processor core and a 16-bit RISC processor core were used to accelerate the multiplication of binary polynomials for arithmetic in GF(2^m). An implementation targeting a MIPS32 architecture attempts to accelerate word-level polynomial multiplication through the use of Comba's method of handling the inner loops of the multiplication operation. Numerous generalized Galois field multipliers have also been proposed for use in elliptic curve cryptosystems. These implementations focus on accelerating exponentiation and inversion in Galois fields GF(2^m) where m is in the range of 160 to 256.
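The kind of Galois-field multiplication these extensions accelerate can be sketched in software. Below is the classic shift-and-XOR polynomial multiplication in GF(2^8) with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 – one well-known instance of GF(2^m) arithmetic; the worked value {57}·{83} = {c1} is the example given in FIPS-197. A custom instruction collapses this whole loop into one operation:

```python
def gf256_mul(a: int, b: int) -> int:
    # Multiply two elements of GF(2^8), reduction polynomial 0x11B
    # (x^8 + x^4 + x^3 + x + 1), via shift-and-XOR ("Russian peasant").
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a          # carry-less "addition" is XOR
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B           # reduce modulo the field polynomial
    return result

# Worked example from FIPS-197: {57} * {83} = {c1}
assert gf256_mul(0x57, 0x83) == 0xC1
```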

    Instruction set extensions designed to minimize the number of memory accesses and accelerate the performance of AES implementations have been proposed for a wide range of processors. Extensions targeting a general-purpose RISC architecture with multimedia instructions yield strategies to implement AES using multimedia instructions while specifically attempting to minimize the number of memory accesses. While the processor is datapath-scalable, the strategies do not map well to 32-bit architectures. Extensions designed to combine the SubBytes and MixColumns AES functions into one T table look-up operation to speed up algorithm execution have also been proposed. However, the functional unit requires a significant amount of hardware to implement and cannot be used for either the final AES round (where the MixColumns function is not used) or key expansion (where the SubBytes function is used without the MixColumns function), and T table performance is heavily dependent upon available cache size. Extensions targeting the Xtensa 32-bit processor improve the performance of AES encryption but worsen the performance of decryption. An implementation targeting a LEON2 processor core combines the SubBytes and ShiftRows AES functions through the use of an instruction set extension termed sbox. Special instructions are also provided to efficiently compute the MixColumns AES function through the use of ECC instruction set extensions.
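As a rough illustration of what a combined SubBytes/MixColumns T table actually contains, the following Python sketch (software only; it is not any of the cited hardware designs) derives the AES S-box from its definition in FIPS-197 and builds the T0 table, which packs S(x) premultiplied by the MixColumns coefficients {2, 1, 1, 3} into one 32-bit word. A single 8-bit-to-32-bit lookup then replaces a byte substitution plus four field multiplications:

```python
# Illustrative construction of an AES "T table" fusing SubBytes with one
# column's worth of MixColumns into a single 8-bit -> 32-bit lookup.

def gmul(a: int, b: int) -> int:
    """Multiplication in AES's field GF(2^8), modulo x^8 + x^4 + x^3 + x + 1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return r

# S-box = multiplicative inverse in GF(2^8) followed by an affine map.
INV = [0] * 256
for i in range(1, 256):
    for j in range(1, 256):
        if gmul(i, j) == 1:
            INV[i] = j
            break

def affine(b: int) -> int:
    rotl = lambda x, n: ((x << n) | (x >> (8 - n))) & 0xFF
    return b ^ rotl(b, 1) ^ rotl(b, 2) ^ rotl(b, 3) ^ rotl(b, 4) ^ 0x63

SBOX = [affine(INV[i]) for i in range(256)]

# T0[x] = [2*S(x), S(x), S(x), 3*S(x)] packed into one 32-bit word.
T0 = [(gmul(s, 2) << 24) | (s << 16) | (s << 8) | gmul(s, 3) for s in SBOX]

print(hex(SBOX[0x53]))  # 0xed, a well-known S-box value
```

The table is 256 x 32 bits (1 KB per table, four tables in the usual software formulation), which is why the text notes that T-table performance depends so heavily on the available cache size.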

    Clearly, the use of instruction set extensions allows existing processor technologies to be leveraged in combination with custom functionality to vastly improve the performance of the targeted algorithms.  However, even within classifications of algorithms, such as symmetric-key algorithms, a wide range of additional functionality may be required to accelerate the entire suite.  A trade-off analysis of hardware resource requirements versus expected performance improvement is critical when evaluating which core elements of each algorithm to accelerate via added hardware units.  Relevant references that review the discussed implementations are included below.

    References

    1. S. Bartolini, I. Branovic, R. Giorgi, and E. Martinelli, “A Performance Evaluation of ARM ISA Extension for Elliptic Curve Cryptography Over Binary Finite Fields,” in Proceedings of the Sixteenth Symposium on Computer Architecture and High Performance Computing – SBC-PAD 2004, Foz do Igua¸cu, Brazil, October 27-29 2004, pp. 238-245.

    2. J. Großchädl and G.-A. Kamendje, “Instruction Set Extension for Fast Elliptic Curve Cryptography Over Binary Finite Fields GF(2m),” in Proceedings of the Fourteenth IEEE International Conference on Application- Specific Systems, Architectures and Processors – ASAP 2003, The Hague, The Netherlands, June 24-26 2003, pp. 455-468.

    3. J. Großchädl and E. Savas, “Instruction Set Extensions for Fast Arithmetic in Finite Fields GF(p) and GF(2m),” in Workshop on Cryptographic Hardware and Embedded Systems – CHES 2004, M. Joye and J.-J. Quisquater, Eds., Cambridge, Massachusetts, USA, August 11-13 2004, vol. LNCS 3156, pp. 133-147, Springer-Verlag.

    4. J. Irwin and D. Page, “Using Media Processors for Low-Memory AES Implementation,” in Proceedings of the Fourteenth IEEE International Conference on Application-Specific Systems, Architectures and Processors – ASAP 2003, The Hague, The Netherlands, June 24-26 2003, pp. 144-154.

    5. K. Nadehara, M. Ikekawa, and I. Kuroda, “Extended Instructions for the AES Cryptography and Their Efficient Implementation,” in Proceedings of the Eighteenth IEEE Workshop on Signal Processing Systems – SIPS 2004, Austin, Texas, USA, October 13-15 2004, pp. 152-157.

    6. S. O’Melia, Instruction Set Extensions for Enhancing the Performance of Symmetric Key Cryptographic Algorithms, MSEE Thesis, University of Massachusetts Lowell, 2005.

    7. S. Ravi, A. Raghunathan, N. Potlapally, and M. Sankaradass, “System Design Methodologies for a Wireless Security Processing Platform,” in Proceedings of the 2002 Design Automation Conference – DAC 2002, New Orleans, Louisiana, USA, June 10-14 2002, pp. 777-782.

    8. S. Tillich and J. Großchädl, “A Simple Architectural Enhancement for Fast and Flexible Elliptic Curve Cryptography Over Binary Finite Fields GF(2m),” in Proceedings of the Ninth Asia-Pacific Conference on Advances in Computer Systems Architecture – ACSAC 2004, Beijing, China, September 7-9 2004, vol. LNCS 3189, pp. 282-295, Springer-Verlag.

    9. S. Tillich and J. Großchädl, “Accelerating AES Using Instruction Set Extensions for Elliptic Curve Cryptography,” in International Conference on Computational Science and Its Applications – ICCSA 2005, O. Gervasi, M. L. Gavrilova, V. Kumar, A. Laganà, H. P. Lee, Y. Mun, D. Taniar, and C. J. K. Tan, Eds., Singapore, May 9-12 2005, vol. LNCS 3481, pp. 665-675, Springer-Verlag.

    10. S. Tillich and J. Großchädl, “Instruction Set Extensions for Efficient AES Implementation on 32-bit Processors,” in Workshop on Cryptographic Hardware and Embedded Systems – CHES 2006, L. Goubin and M. Matsui, Eds., Yokohama, Japan, October 10-13 2006, vol. LNCS 4249, pp. 270-284, Springer-Verlag.

    11. S. Tillich and J. Großchädl and A. Szekely, “An Instruction Set Extension for Fast and Memory-Efficient AES Implementation,” in Proceedings of the Ninth International Conference on Communications and Multimedia Security – CMS 2005, J. Dittmann, S. Katzenbeisser, and A. Uhl, Eds., Salzburg, Austria, September 19-21 2005, vol. LNCS 3677, pp. 11-21, Springer-Verlag.

    All the products mentioned herein which have trademarks and/or registered trademarks belong to their respective owners.

    NVIDIA CUDA Competition

    Posted by CRYPTOcrat | Crypto Application, Encryption | Tuesday 3 June 2008 7:55 pm

    Contributed by Abhishek

    This might be of interest to some of you. There is a contest held by NVIDIA, the graphics card leader, to experience the power of their GPUs by way of NVIDIA CUDA™ technology. CUDA is the world’s only C language environment that provides access to the processing power of NVIDIA GPUs. This enables developers to utilize NVIDIA GPUs to solve the most complex computation-intensive challenges such as oil and gas exploration, financial risk management, product design, medical imaging, and scientific research.

    The CUDA Contest welcomes all types of mainstream stand-alone applications or plug-ins running in Windows, Linux or MacOS environments, but is looking to reward innovative, useful apps that make the best use of the GPU processing power. Scientific applications are excluded. I thought this could be of interest to CRYPTOcrats who would like to test/verify their algorithms or even do cryptanalysis.

    Find below the flyer of this competition. You can contact NVIDIA directly or use the comments section if you need more information.

    NVIDIA CUDA Competition Flyer

    Privacy Preserving Auctions – Part 2

    Posted by CRYPTOcrat | Crypto Application, Encryption | Thursday 22 May 2008 10:09 pm

    In continuation to Part 1

    Even if the security of the server can’t be breached, this still does not solve the problem in its entirety. The remaining problem can be formally stated as: “Even if current information can be safeguarded, records of past behavior can be extremely valuable, since the historical data can be used to estimate willingness to pay.” Varian [8] was the first to address this problem. Let’s understand it with an example.

    Cheating by the seller in subsequent auctions

    Refer to the illustration above. Suppose you participate in an online second price auction for a single item on some website. You bid $1000. The second-highest bid is $750, so you win and pay $750. The next day there is an auction for the same item on the same website; again you bid $1000, but now the second-highest bid is $999. It is quite possible that the website has used your previous bid against you.

    To prevent such manipulations in auctions, the most desirable properties are as follows:

    • No coalition of players should be able to learn your private values or manipulate the output of the auction.
    • Only the winner and the payment should be disclosed.
    • Even after the auction closes, your private values should not be revealed.
    • The protocol should be publicly verifiable.
    • No bidder should be able to deny the bid he has placed.

    To achieve the above properties, the bidders themselves compute the output of the auction. Secure Multi-Party Computation (MPC) and cryptography play an important role here.

    MPC deals with the evaluation of a function ‘f’ of ‘n’ inputs, where each input is held by one agent. A secure MPC protocol evaluates this function in such a way that no information about the inputs is leaked by the protocol. For example, suppose two millionaires wish to decide who is richer without revealing their actual wealth. Yao [9] proposed a protocol for this problem that determines the richer of the two, such that at the end of the protocol neither party knows the actual wealth of the other. This is one example of secure MPC.
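Yao’s comparison protocol itself requires garbled circuits, but the flavor of MPC is easy to show with a simpler primitive: additive secret sharing, where parties jointly compute a sum while no party (or coalition short of all of them) learns another’s input. A toy Python sketch, illustrative only (the inputs and modulus are made up):

```python
# Toy secure summation via additive secret sharing: each party splits its
# private input into random shares; any subset of fewer than all shares
# is uniformly random and reveals nothing about the input.
import random

MOD = 2**61 - 1  # any modulus larger than the maximum possible sum

def share(secret: int, n_parties: int) -> list:
    """Split a secret into n additive shares modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

# Three parties compute their total wealth without revealing individual
# values: party i sends its j-th share to party j, and each party then
# publishes only the sum of the shares it received.
inputs = [5_000_000, 12_000_000, 7_500_000]
all_shares = [share(x, 3) for x in inputs]
partial_sums = [sum(all_shares[i][j] for i in range(3)) % MOD
                for j in range(3)]
total = sum(partial_sums) % MOD
print(total)  # 24500000
```

Summation alone is not enough to run an auction (which needs comparisons), but it shows the core MPC idea: the published values reveal the function output and nothing more.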

    With the advent of the Internet, the design of secure auctions has become an important and challenging task. By a secure auction, we mean one in which the bids remain secret even after evaluation of the auction. Researchers have used secure MPC, trusted third parties, and homomorphic encryption for designing auctions over the web. (Note on homomorphic encryption: if (m1, c1) and (m2, c2) are plaintext-ciphertext pairs, then c1 × c2 is the ciphertext of m1 × m2. RSA and the El-Gamal encryption scheme, for example, are homomorphic encryption schemes.)
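The multiplicative property mentioned in the note is easy to verify with textbook RSA. A minimal Python sketch with toy parameters (never use primes this small, and note that real deployments use padding, which deliberately breaks this property):

```python
# Multiplicative homomorphism of textbook RSA: the product of two
# ciphertexts decrypts to the product of the two plaintexts.
# Toy parameters (p = 61, q = 53) for illustration only.

p, q = 61, 53
n = p * q                           # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse of e, here 2753

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 7, 3
c_product = (enc(m1) * enc(m2)) % n
print(dec(c_product))  # 21, i.e. m1 * m2, computed on ciphertexts alone
```

This is exactly the property the auction protocols of Kikuchi [5], Abe and Suzuki [1], and Brandt [2] exploit: servers or bidders can combine encrypted bids without ever decrypting an individual bid.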

    Pointers for Further Reading

    Some of the important research papers are:

    1. Franklin and Reiter [3] proposed the use of multiple servers for conducting the auction; each bid is split and submitted partially to each server. The auction remains secure unless a third of the servers are compromised.

    2. Nurmi and Salomaa [7] generalized Yao’s protocol for secure Vickrey auctions.

    3. Naor et al. [6] used an Auction Issuer (AI), a semi-trusted third party.

    4. Kikuchi [5] and Abe and Suzuki [1] proposed secure auction protocols based on homomorphic encryption.

    5. Felix Brandt [2] designed many auctions using homomorphic encryption. In his auction protocols, all the bidders jointly compute the output of the auction.

    Conclusion

    In this article we saw what the privacy issues in online auction design are, what secure auction design means, and what properties are expected from an auction protocol, along with some references for secure auction design. For a concise technical summary of the topic, interested readers may refer to my talk [4].

    References

    [1] Masayuki Abe and Koutarou Suzuki. M+1 st price auction using homomorphic encryption. In PKC ‘02: Proceedings of the 5th International Workshop on Practice and Theory in Public Key Cryptosystems, pages 115-124, London, UK, 2002. Springer-Verlag.

    [2] Felix Brandt. How to obtain full privacy in auctions. International Journal of Information Security., 5(4):201-216, 2006.

    [3] Matthew K. Franklin and Michael K. Reiter. The design and implementation of a secure auction service. IEEE Trans. Software. Eng., 22(5):302-312, 1996.

    [4] Sujit Gujar and Y Narahari. Some issues in auctions with manipulative players. Technical report, Dept of CSA, Indian Institute of Science, March 2007.

    Available at http://people.csa.iisc.ernet.in/sujit/docs/perspective-reort.pdf.

    [5] Hiroaki Kikuchi. (m+1)st-price auction protocol. In FC ‘01: Proceedings of the 5th International Conference on Financial Cryptography, pages 351-363, London, UK, 2002. Springer-Verlag.

    [6] Moni Naor, Benny Pinkas, and Reuban Sumner. Privacy preserving auctions and mechanism design. In EC ‘99: Proceedings of the 1st ACM conference on Electronic commerce, pages 129-139, New York, NY, USA, 1999. ACM Press.

    [7] Hannu Nurmi and Arto Salomaa. Cryptographic protocols for vickrey auctions. In Group Decision and Negotiation, pages 363-373. Springer Netherlands, 1993.

    [8] H. R. Varian. Economic mechanism design for computerized agents. In 1st USENIX Workshop on Electronic Commerce, 1995.

    [9] Andrew Chi-Chih Yao. Protocols for secure computations (extended abstract). In In Proceedings of 23rd Annual Symposium on Foundations of Computer Science, FOCS’82, pages 160-164, 1982.

    …………………………………………………………………………………………………………………………………………..

    Author:

    Sujit P Gujar, a long-time CRYPTOcrat. More information about Sujit is available at this link.

    Privacy Preserving Auctions – Part 1

    Posted by CRYPTOcrat | Crypto Application, Encryption | Saturday 17 May 2008 6:57 am

    Author:

    Sujit P Gujar, a long-time CRYPTOcrat. More information about Sujit is available at this link.

    Abstract:

    In this article, I will talk about what auctions are, what the important issues in online auctions are, and why cryptography plays an important role in them. I will also provide pointers to technical details on privacy preserving auctions.

    Auctions are popular mechanisms used for buying or selling different items through a bidding process. Before going into detail about privacy issues in auctions, I will explain the two most important types of auctions. Consider a seller who wishes to sell a single unit of an indivisible item (that is, the item must be allocated as a single piece) to one of multiple buyers. Examples include an auction for an antique painting, or for oil drilling rights over a region. Each bidder/buyer has a maximum willingness to pay, referred to as the valuation or type information of that agent. This information is private to each agent. The auctioneer has to specify the allocation rules and the payment rules. Though there exist plenty of varieties of auctions, the most popular are:

    First price auction

    Potential buyers submit sealed bids and the highest bidder is awarded the item. The winning bidder pays the price that he or she has bid.

    Second Price Auction

    This is also called the Vickrey auction. Potential buyers submit sealed bids and the highest bidder is awarded the item. The winning bidder pays a price equal to the second highest bid (which is also the highest losing bid). In a first price auction, intelligent bidders will not bid their valuations. But Vickrey has shown that in a second price auction, everybody should bid their maximum willingness to pay: truthful bidding is a dominant strategy.
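The two payment rules differ in a single line, which a small illustrative Python sketch makes plain (bidder names and amounts are made up):

```python
# Sealed-bid auction: the highest bidder always wins; only the payment
# rule differs between a first price and a second price (Vickrey) auction.

def run_auction(bids: dict, second_price: bool) -> tuple:
    """Return (winner, payment) for sealed bids given as {bidder: amount}."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    # First price: winner pays own bid. Second price: winner pays the
    # highest losing bid.
    payment = bids[ranked[1]] if second_price else bids[winner]
    return winner, payment

bids = {"Alice": 1000, "Bob": 750, "Carol": 600}
print(run_auction(bids, second_price=False))  # ('Alice', 1000)
print(run_auction(bids, second_price=True))   # ('Alice', 750)
```

Note that in the second price case Alice’s payment does not depend on her own bid at all, which is precisely why bidding her true valuation is a dominant strategy.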

    Online Auction

    With the advent of the Internet era, online auctions are widely used. As the security of servers is continuously challenged by hackers and malicious users, some players may be able to breach the security and gain knowledge of the private information of other players. This type of manipulation is possible by both sellers and buyers. A seller can profitably cheat by examining bids before the auction clears and submitting an extra bid under a false identity; such a bid is called a shill bid. In a first price auction, the seller gains nothing from a shill bid: either the shill bid wins and no trade happens, or the winner pays whatever he bid anyway. In a second price auction, submitting one’s true type information (that is, one’s true bid) is a dominant strategy irrespective of the others’ bids, so even if a buyer gets access to the other bids in a Vickrey auction, his strategy does not change. Thus there is no need to study cheating by the seller in first price auctions or cheating by buyers in second price auctions. However, cheating is possible in the following two scenarios:

    1. A manipulative seller in a second price auction
    2. Manipulative buyers in a first price auction

    The Case of a Manipulative Seller in Second Price Auction

    A seller can profitably cheat in a second price auction. For example, if the bidders in an eBay auction each use a proxy bidder (essentially creating a second-price auction), then the seller may be able to break into eBay’s server, observe the maximum price that a bidder is willing to pay, and then extract this price by submitting a shill bid just below it under a false identity.

    Here is an example to explain this case

    Cheating by a seller

    Refer to the illustration above. Suppose Mr. C announces an online second price auction for selling an indivisible item. Ms. H participates in this auction and bids her valuation, $1000. Mr. C breaks into the server before the auction closes, finds out the highest bid by Ms. H, and places a shill bid of $999. Ms. H has no option but to pay $999.

    The Case of Manipulative Buyers in a First Price Auction

    We now consider the case in which the seller is honest, but there is a chance that some of the buyers will cheat and examine the others’ bids before submitting their own (or, alternatively, they will revise their bid before the auction clears).

    Here is an example to explain this case

    Cheating by a buyer

    Refer to the illustration above. Mr. A announces an online first price auction for selling an indivisible item. Ms. H participates in this auction. She has a valuation of $1000 for the item and bids $700 in anticipation of maximizing her utility. Ms. C, her competitor, who has a valuation of $730, breaks into the server before the auction closes, finds out the highest bid by Ms. H, and places her own bid of $701. Ms. C wins the object even though she has the lower valuation.
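The effect of this peek-and-overbid manipulation can be simulated directly. An illustrative Python sketch of the example above (the strategy of overbidding by the smallest increment is my assumption of how a cheater would act):

```python
# First price auction where one buyer peeks at the standing high bid and
# overbids it by the smallest increment her valuation allows.

bids = {"H": 700}     # Ms. H (true valuation $1000) shades her bid to $700
valuation_C = 730     # Ms. C's true valuation

peek = max(bids.values())          # Ms. C breaks in and sees $700
cheat_bid = peek + 1
if cheat_bid <= valuation_C:       # still profitable, so she places it
    bids["C"] = cheat_bid

winner = max(bids, key=bids.get)   # first price rule: highest bid wins
price = bids[winner]               # ...and pays its own bid
print(winner, price)               # C 701: lower valuation, yet she wins
print(valuation_C - price)         # 29: Ms. C's positive utility
```

The outcome is inefficient (the item goes to the bidder who values it less), which is the allocation-level damage this kind of cheating causes on top of the privacy breach.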

    The above discussion clearly highlights the importance of security and particularly cryptography in auction design. Cryptography can play a bigger role than just solving the problems mentioned above.

    Let’s break here for now and we shall continue this discussion in the next part of this article.

    To be concluded…

    OpenSSL RNG problem discovered on Debian!

    Posted by Amit | Encryption | Thursday 15 May 2008 12:10 pm

    Well.. Ok.. I promise this will be a short-n-quick one and we SHALT have the article series as planned..

    However, this news just shocked me, so I could not resist sharing it with you all. It won’t be wrong to say that most of us have used the OpenSSL package at least once during our professional lives as security developers. It is as good as a cult for many. (If you haven’t, don’t miss out on the opportunity to do so now. Here is the link to learn more: OpenSSL.)

    I for one have been using OpenSSL since the days it used to be called SSLeay, so that goes as far back as 1997. It’s a masterpiece really! One of the most widely used and best supported packages in the Security/Crypto community.

    Returning from my strong sentimental attachment to OpenSSL back to the reason for this post: here is the link that describes the OpenSSL Random Number Generator issue. Friends at SecurityFocus.com have some more details about who is affected by this vulnerability. Here is the link. The issue seriously affects the uniqueness of keys generated on Debian, making them predictable.

    I think this brings us back to one of the earlier topics about Personalization of Private Keys.

    Did any of you get affected? Any thoughts? Do write in your experiences in the comments section.

    Article on “Privacy Preserving Auctions”

    Posted by Amit | Crypto Application, Encryption | Monday 12 May 2008 10:56 am
    In this article, the author will talk about what auctions are, what the important issues in online auctions are, and why cryptography plays an important role in them. He will also provide pointers to technical details on privacy preserving auctions.

    The article is authored by Sujit P Gujar, who is a long-time CRYPTOcrat (http://www.linkedin.com/pub/5/401/925) and currently a research scholar at the Indian Institute of Science, Bangalore, India.

    You can find more information about Sujit on his website here: http://people.csa.iisc.ernet.in/sujit

    For easier reading this soon to be published article is split into two parts.

    Gnidaer yppah ;)  
