# Information Security Seminar

## Past sessions

### Trapdoor Hash Functions and their Applications

We introduce a new primitive, called trapdoor hash functions (TDH), which are compressing hash functions with additional trapdoor function-like properties. Specifically, given an index $i$, TDHs allow for sampling an encoding key $e_k$ (which hides $i$) along with a corresponding trapdoor. Furthermore, given a hash value $H(x)$, a hint value $E(e_k,x)$, and the trapdoor corresponding to $e_k$, the $i$-th bit of $x$ can be efficiently recovered. In this setting, one of our main questions is: How small can the hint value $E(e_k,x)$ be? We obtain constructions where the hint is only one bit long based on DDH, QR, DCR, or LWE.

As the main application, we obtain the first constructions of private information retrieval (PIR) protocols with communication cost poly-logarithmic in the database size based on DDH or QR. These protocols are in fact rate-1 when considering block PIR.
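The interface described above can be illustrated with a toy GF(2) sketch, loosely modeled on the linear structure underlying lattice-style constructions but with all hardness stripped out, so it is entirely insecure; it only shows the syntax: a compressing hash $H$, a one-bit hint $E(e_k,x)$, and trapdoor recovery of the $i$-th bit. All names and parameters here are illustrative, not from the paper.

```python
import secrets

# Toy, INSECURE sketch of the trapdoor-hash (TDH) interface only.
# H is a compressing linear hash, the hint is a single bit, and the
# trapdoor recovers the i-th input bit from H(x) and the hint.
N, K = 16, 8                                   # input length > hash length
A = [[secrets.randbelow(2) for _ in range(N)]  # public compressing matrix
     for _ in range(K)]

def H(x):
    """Compressing hash: H(x) = A x over GF(2)."""
    return [sum(a * b for a, b in zip(row, x)) % 2 for row in A]

def keygen(i):
    """Encoding key ek = (A^T t) xor e_i; the trapdoor is t.

    In this toy, i is masked only by the random vector A^T t; the real
    constructions hide i under DDH/QR/DCR/LWE."""
    t = [secrets.randbelow(2) for _ in range(K)]
    ek = [sum(A[r][c] * t[r] for r in range(K)) % 2 for c in range(N)]
    ek[i] ^= 1                                 # add the unit vector e_i
    return ek, t

def E(ek, x):
    """One-bit hint: E(ek, x) = <ek, x> over GF(2)."""
    return sum(a * b for a, b in zip(ek, x)) % 2

def decode(t, h, hint):
    """Recover x_i: hint xor <t, H(x)> = x_i, since <A^T t, x> = <t, A x>."""
    return hint ^ (sum(a * b for a, b in zip(t, h)) % 2)

x = [secrets.randbelow(2) for _ in range(N)]
for i in range(N):
    ek, t = keygen(i)
    assert decode(t, H(x), E(ek, x)) == x[i]
```

The correctness identity is the same one the real constructions exploit: the hint equals $\langle t, H(x)\rangle \oplus x_i$, so the trapdoor holder cancels the first term using only the short hash value.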

### RLWE-based authentication and key reuse for RLWE-based key exchanges

In this talk, we will present new authentication schemes in which a prover proves that he or she knows a secret solution to the RLWE problem without revealing any information about the solution. For an interactive scheme, we can prove the zero-knowledge property. We will also present a non-interactive scheme and show how it can be used to build RLWE-based key exchanges in which key reuse is also secure.

### Defining privacy is supposed to be easy

Formally specifying privacy goals is not trivial. The most widely used approach in formal methods is based on the static equivalence of frames in the applied pi-calculus, basically asking whether or not the intruder is able to distinguish two given worlds. A subtle question is how we can be sure that we have specified all pairs of worlds to properly reflect our intuitive privacy goal. To address this problem, we introduce in this paper a novel and declarative way to specify privacy goals, called $\alpha$-$\beta$ privacy, and relate it to static equivalence. This new approach is based on specifying two formulae $\alpha$ and $\beta$ in first-order logic with Herbrand universes, where $\alpha$ reflects the intentionally released information and $\beta$ includes the actual cryptographic ("technical") messages the intruder can see. Then $\alpha$-$\beta$ privacy means that the intruder cannot derive any "non-technical" statement from $\beta$ that he cannot derive from $\alpha$ already. We describe by a variety of examples how this notion can be used in practice. Even though $\alpha$-$\beta$ privacy does not directly contain a notion of distinguishing between worlds, there is a close relationship to static equivalence of frames that we investigate formally. This allows us to justify (and criticize) the specifications that are currently used in verification tools, and obtain partial tool support for $\alpha$-$\beta$ privacy. Joint work with Sebastian Moedersheim and Thomas Gross.

### Improving classical authentication over a quantum channel

We introduce a quantum protocol to authenticate classical messages that can be used to replace Wegman-Carter's classical authentication scheme in quantum key distribution (QKD) protocols. We show that the proposed scheme achieves greater conditional entropy of the seed for the intruder given her (quantum) observation than the classical case. Our protocol is an improvement over a classical scheme by Brassard, taking advantage of quantum channel properties. It is motivated by information-theoretical results. However, by adopting it, QKD becomes a fully quantum protocol. We prove that quantum resources can improve both the secrecy of the key generated by the PRG and the secrecy of the tag obtained with a hidden hash function. We conclude that the proposed quantum encoding offers more security than the classical scheme and, by applying a classical result, we show that it can be used under noisy quantum channels. Joint work with F. Assis, A. Stojanovic and Y. Omar.
Joint session with the Logic and Computation Seminar

### Quantum Hardcores and Quantum Public-Key Cryptosystems

We will cover two important notions in quantum cryptography and give their concrete examples: quantum hardcores and quantum public-key cryptosystems. 1) Hardcore functions have played an essential role in building secure cryptosystems. They are closely associated with the list-decodability of certain codes. We establish a close relationship between quantum hardcore functions and quantum list-decoding. From three classical codes, we construct three new quantum hardcore functions for quantum one-way functions. 2) A private-key cryptosystem requires a large number of keys, whereas a public-key cryptosystem needs only a single encoding key for all senders. To develop a large-scale quantum network in the near future, it is thus desirable to build an efficient public-key quantum cryptosystem. We present the first quantum public-key cryptosystem that withstands any eavesdropper's chosen-plaintext quantum attack if a certain graph-theoretic problem cannot be solved efficiently by quantum computers.

### Symbolic Probabilistic Analysis of Off-line Guessing

Much work has been devoted to strengthening the guarantees provided by symbolic methods for protocol analysis, either by automatically obtaining proofs of computational security or by proving that security with respect to a symbolic model implies computational security. These results require strong assumptions on the security of the cryptographic primitives used. Such assumptions are often not verified by real-world implementations.

In this talk we present a fundamentally new approach to this problem. We introduce a symbolic framework for the analysis of protocol security against a stronger attacker than that considered in existing symbolic methods.

We provide a general method for expressing properties of cryptographic primitives. It allows modeling, for instance, message redundancy, weak random generation algorithms, or the leakage of partial information about a plaintext by an encryption scheme. These properties can be used to automatically find attacks and estimate their success probability. Under assumptions about the accuracy of the properties considered, we show that our estimates differ negligibly from the real-world success probability.

As case studies, we model non-trivial properties of RSA encryption and estimate the probability of off-line guessing attacks on the EKE protocol. These attacks are outside the scope of existing symbolic methods.

### Individual Witness Hiding

In an interactive proof system, there is a prover and a verifier, and the goal of the prover is to convince the verifier of the validity of "$x\in L$'', for some language $L$. A natural question arises: "What knowledge does the verifier gain during the interaction?'' The definitions of zero knowledge and witness hiding are based on the existence of simulators or witness extractors. One might raise some questions about such "operational'' definitions. For instance, can there be protocols that actually leak no information to the verifier, but are so obfuscating that no simulator exists? Also, the existence of a simulator might be too weak to guarantee that no information is leaked to the verifier in every instance of the protocol. We propose a new approach based on Kolmogorov complexity, a rigorous measure of the amount of information contained in an object, usually a string: the measure of leakage is the difference between the Kolmogorov complexity of the witness when one does not see the conversation and when one does. We explore some of its properties and design an individual witness hiding protocol (IWHP) for all languages in $\mathrm{FewP}$. This is joint work with Luís Antunes, Sophie Laplante, Paulo Mateus and Andreia Teixeira.
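The leakage measure described in the abstract can be written, in illustrative notation not taken from the paper, as the drop in Kolmogorov complexity of the witness caused by seeing the conversation:

```latex
% Individual leakage of a conversation c about a witness w, as sketched in
% the abstract: how much shorter a description of w becomes given c.
\[
  \mathrm{leak}(c) \;=\; K(w) \;-\; K(w \mid c)
\]
% Intuitively, a protocol hides the witness on an individual instance when
% this difference is small for the conversations the verifier observes.
```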

### Fitting's embedding of first-order logic into first-order S4 and the computationally complete symbolic attacker

We present how first-order logic and a computational semantics defined via Fitting's embedding of FOL into first-order S4 can be used to define symbolic attackers of security protocols that are at least as powerful as the computational ones.

### Automated Verification of Equivalence Properties of Cryptographic Protocols

Indistinguishability properties are essential in formal verification of cryptographic protocols. They are needed to model anonymity properties, strong versions of confidentiality and resistance to offline guessing attacks, and can be conveniently modeled using process equivalences. We present a novel procedure to verify equivalence properties for a bounded number of sessions. Our procedure is able to verify trace equivalence for determinate cryptographic protocols. On determinate protocols, trace equivalence coincides with observational equivalence, which can therefore be automatically verified for such processes. When protocols are not determinate, our procedure can be used for both under- and over-approximations of trace equivalence, which proved successful on examples. The procedure can handle a large set of cryptographic primitives, namely those which can be modeled by an optimally reducing convergent rewrite system. Although we were unable to prove its termination, the procedure has been implemented in a prototype tool and has been effectively tested on examples, some of which were outside the scope of existing tools.

### FAST: An efficient decision procedure for Static Equivalence and Message Deducibility for subterm convergent equational theories

Cryptographic protocols are widely used to provide secure communication over insecure networks, and much work has been devoted to ensuring their correctness. Symbolic approaches to this task have achieved significant success: based on them, several automatic protocol verification tools have been implemented, and these have been successful in finding many protocol attacks. Message deducibility and static equivalence are two central problems in these approaches. The attacker's capabilities are usually expressed in terms of message deducibility, i.e., which messages the attacker can learn. Static equivalence is an indistinguishability-based notion of knowledge that can be used to model strong secrecy and has been used, e.g., to study off-line guessing attacks and e-voting protocols. FAST is an efficient decision procedure for these two problems under subterm convergent equational theories. This class of equational theories is sufficient for describing a broad range of cryptographic primitives. In this talk we compare FAST with other existing tools, both in terms of theoretical complexity and performance in practice. For subterm convergent equational theories, FAST has a significantly better asymptotic complexity, and its performance in practice confirms this: for many practically relevant examples, FAST terminates in under a second, whereas other tools take several minutes.
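A minimal sketch of the message-deducibility problem that FAST decides, for a two-rule subterm convergent theory (symmetric decryption `dec(enc(x,k),k) -> x` and pair projections). This naive analysis-then-synthesis saturation is only illustrative; it is not FAST's actual algorithm, and the term representation is an assumption of this sketch.

```python
# Terms are nested tuples; atoms are strings. The theory is subterm
# convergent: every rule's right-hand side is a subterm of its left-hand side.
def enc(m, k): return ('enc', m, k)
def pair(a, b): return ('pair', a, b)

def deducible(knowledge, target):
    """Can the attacker derive `target` from the set `knowledge`?"""
    known = set(knowledge)
    changed = True
    while changed:                       # analysis: saturate with subterms
        changed = False
        for t in list(known):
            if isinstance(t, tuple) and t[0] == 'pair':
                new = {t[1], t[2]} - known          # project both components
            elif isinstance(t, tuple) and t[0] == 'enc' and t[2] in known:
                new = {t[1]} - known                # decrypt with known key
            else:
                new = set()
            if new:
                known |= new
                changed = True

    def synth(t):                        # synthesis: rebuild target top-down
        if t in known:
            return True
        if isinstance(t, tuple) and t[0] in ('pair', 'enc'):
            return synth(t[1]) and synth(t[2])
        return False

    return synth(target)

# Seeing enc(pair(s, n), k) and later learning k makes s deducible.
assert deducible({enc(pair('s', 'n'), 'k'), 'k'}, 's')
assert not deducible({enc(pair('s', 'n'), 'k')}, 's')
```

The second assertion is the shape of a basic secrecy check: without the key, the secret `s` stays underivable even though the ciphertext is public.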

### Programming Secrecy in Untrusted Networks

The idea that the restriction operator of the pi-calculus (new) and its scope extrusion mechanism can model the possession and communication of a secret goes back to the spi-calculus. In this approach, one can devise specifications of protocols that rely on channel-based communication protected by new, and establish the desired security properties by using equivalences representing indistinguishability in the presence of a Dolev-Yao attacker. However, while some protocols can assume dedicated channels to be available, it is not obvious how to enforce secret communication over the Transport/Internet layer. In practice, the secrecy of open communications is recovered by relying on cryptographic mechanisms, which can be very complex in the presence of scope extrusion. Based on these considerations, in this talk we enrich the pi-calculus with an operator for confidentiality (hide), whose main effect is to restrict access to the communication channel, thus representing confidentiality in a natural way. The hide operator is meant for local communication, and it differs from new in that it forbids the extrusion of the channel and hence has a static scope. Consequently, a communication channel in the scope of a hide can be implemented as a dedicated channel, and it is more secure than one in the scope of a new. To emphasize the difference, we introduce a spy process that represents a side-channel attack and breaks some of the standard security equations for new. To formally reason about the security guarantees provided by the hide construct, we introduce an observational theory and establish stronger equivalences by relying on a proof technique based on bisimulation semantics. Joint work with Catuscia Palamidessi and Frank D. Valencia.

### A Hierarchy of Knowledge for the Formal Analysis of Security-Sensitive Business Processes

Security-sensitive business processes are business processes that must comply with security requirements such as authorization constraints or separation or binding of duty. As such, they are difficult to design and notoriously prone to error, and a number of approaches have been proposed for formalizing and reasoning about models of such processes to detect potential vulnerabilities. In this paper, we present an approach that introduces the notion of knowledge for the formal analysis of security-sensitive business processes. We structure knowledge hierarchically, in different levels that can interact with each other in order to derive new information, which allows us to specify information about sets of critical tasks at different levels and thereby control the process execution and enforce security properties. Joint work with Simone Marchesini.

### Attack Interference in Non-Collaborative Scenarios for Security Protocol Analysis

In security protocol analysis, the traditional choice to consider a single Dolev-Yao attacker is supported by the fact that models with multiple collaborating Dolev-Yao attackers have been shown to be reducible to models with one Dolev-Yao attacker. In this paper, we take a fundamentally different approach and investigate the case of multiple non-collaborating attackers. After formalizing the framework for multi-attacker scenarios, we show with a case study that concurrent competitive attacks can interfere with each other. We then present a new strategy to defend security protocols, based on active exploitation of attack interference. The paper can be seen as providing two proof-of-concept results: (i) it is possible to exploit interference to mitigate protocol vulnerabilities, thus providing a form of protection to protocols; (ii) the search for defense strategies requires scenarios with at least two attackers. Joint work with M.-Camilla Fiazza and Michele Peroli.
Note the exceptional time and location.

### On Extracting Security Policies From Program Invariants - A static analysis for dynamic decision making

This presentation approaches the problem of verifying program compliance with information flow policies by proposing a framework that enables dealing with complex and dynamic policies in an efficient and flexible manner. We introduce a calculus, proved to be both sound and optimal, for extracting the fundamental dependencies that are encoded into a program. From the output of this analysis, the strictest security policy under which a program may be executed is then statically inferred. This policy can be used to dynamically decide whether a program is allowed to run, or as a comprehensive and succinct digest of the reasons for which a program is not deemed secure.
This session will be held in Room 0.32 of TagusPark!

### Security in Hop web Applications

I will present common security problems related to integrity violations in web applications and work in progress towards providing solutions for applications written in the Hop language. We are interested in providing formal bases to prove that in web applications (and more generally in generated mashups): (i) trusted components are free of code injection at run-time; (ii) the influence of untrusted components on trusted components is limited to a set of given authorized resources. We investigate formalizations of these properties by means of abstract semantics of web applications and discuss correct implementations to enforce them.
Note Exceptional Day and Room!

### Automated Validation of Security-sensitive Web Services specified in BPEL and RBAC

We formalize automated analysis techniques for the validation of web services specified in BPEL and a RBAC variant tailored to BPEL. The idea is to use decidable fragments of first-order logic to describe the state space of a certain class of web services and then use state-of-the-art SMT solvers to handle their reachability problems. To assess the practical viability of our approach, we have developed a prototype tool implementing our techniques and applied it to a digital contract signing service inspired by an industrial case study. Joint work with Alberto Calvi and Silvio Ranise.

### Randomness Extractors

The main problem with many practical random number generators is that they produce non-uniform, i.e. biased, output. Moreover, the actual probability distribution may not be fixed and can be (in a limited way) controlled by an adversary. The main goal of randomness extractors is to postprocess the output of such a generator so that the extractor's output is (almost) uniformly distributed. A dual situation arises when the adversary does not control the probability distribution of the random number generator, but can learn some information (a fixed number of bits) about the bit sequence output by the generator. It is easy to show that this situation is equivalent to a modification of the probability distribution, and extractors are able to annihilate the adversary's knowledge, i.e. to produce output about which the adversary has (almost) no information. This is also tightly related to the problem of privacy amplification, where two communicating participants want to eliminate the adversary's (limited) knowledge of a commonly shared bit string using public discussion that the adversary can eavesdrop on.
Joint session with the Information Security Seminar
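The debiasing goal in the first part of the abstract can be illustrated with the classic von Neumann extractor, the simplest example of the idea. Note its much weaker setting: it assumes independent bits with a fixed (unknown) bias, so it handles neither adversarially shifting distributions nor general min-entropy sources, which require seeded extractors.

```python
import secrets

def von_neumann(bits):
    """Debias an i.i.d. biased bit stream.

    For independent bits with P(1) = p, the pairs (0,1) and (1,0) each occur
    with probability p*(1-p), so emitting the first bit of every unequal pair
    gives exactly uniform output; equal pairs are discarded."""
    out = []
    for b0, b1 in zip(bits[::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

# A heavily biased source (~80% ones) still yields ~50% ones after extraction.
biased = [1 if secrets.randbelow(10) < 8 else 0 for _ in range(100_000)]
extracted = von_neumann(biased)
print(sum(extracted) / len(extracted))   # close to 0.5
```

The price is throughput: with bias $p$, only a $2p(1-p)$ fraction of input pairs produce an output bit, which is one reason practical designs use seeded extractors instead.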

### Secrecy-Oriented, Computationally Sound First-Order Logical Analysis of Cryptographic Protocols

We present a computationally sound first-order system for the security analysis of protocols that places secrecy of nonces and keys at its center. Even trace properties such as agreement and authentication are proven via a non-trace property, namely by first proving secrecy with an inductive method. These security properties are derived directly from a set of computationally sound axioms, without the need to formalize adversarial capabilities explicitly. This results in a very powerful system, whose workings we illustrate on the agreement and authentication proofs for the Needham-Schroeder-Lowe public-key and the amended Needham-Schroeder shared-key protocols in the case of unlimited sessions. Unlike other available formal verification techniques, computational soundness of our approach does not require any idealizations about the parsing of bitstrings or unnecessary tagging. In particular, we have control over detecting and eliminating the possibility of type-flaw attacks.

### Formal Specification and Verification of Security Services

A framework for specification and verification of security properties in service-oriented architectures is proposed. The framework encompasses communication-level events and policy-level decisions, and allows one to reason about the interface between the two. A decision procedure is given for the reachability problem in a fragment of service-oriented architectures where the Dolev-Yao attacker is in control of the communication level, and the policies of services belong to a certain subset of Horn theories. This subset is in particular sufficient for expressing typical RBAC systems with role hierarchy, as well as trust delegation and trust application.
Note exceptional week day and room.

### Central Security Requirements of E-Voting Systems: Verifiability, Accountability, and Coercion-Resistance

Systems for electronic voting (e-voting systems), including systems for voting over the Internet and systems for voting in a voting booth, are supposed to simplify the process of carrying out elections and potentially make the voting process more secure and reliable. These systems are among the most challenging and complex security-critical systems, with a rich set of intricate security requirements they have to fulfill. For example, besides keeping the votes of individual voters private (privacy of votes), they should allow voters to check that their votes were counted correctly (verifiability), since voting machines might have programming errors or might have been tampered with. On its own, this problem is easy to solve by providing voters with receipts of how they voted. However, at the same time vote buying and voter coercion should be prevented (coercion-resistance). Moreover, in case a problem occurs, it should be possible to hold election officials accountable for their misbehavior (accountability). In recent work, we have formalized important security requirements for e-voting systems, including coercion-resistance, verifiability, and accountability. In this talk, I will discuss these requirements informally and sketch our formal definitions. I will also present several state-of-the-art voting systems and discuss their security with respect to our security definitions. The talk is intended for a broad audience and does not require any background on e-voting systems or cryptography. This talk is based on joint work with Tomasz Truderung and Andreas Vogt.
Note exceptional week day and room.
