# Logic and Computation Seminar

## Past sessions

### Galois theory of analogical classifiers

Analogical proportions are statements of the form "a is to b as c is to d", denoted a:b::c:d, where a, b, c, d are tuples of attribute values describing items. Analogical inference relies on the idea that if four instances a, b, c, d are in analogical proportion on most of the attributes describing them, then the proportion may also hold for the remaining attributes. Similarly, if class labels are known for a, b, c and unknown for d, then one may infer the label of d as a solution of an analogical proportion equation.

Theoretically, it is quite challenging to characterize the situations in which such an analogical inference principle (AIP) can be soundly applied. In the case of Boolean attributes, a first step towards explaining the analogical mechanism was to characterize the set of classifiers for which AIP is sound (i.e., no error occurs). At IJCAI 2017, we took the minimal model of analogy (i.e., the one containing only patterns of the form x : x :: y : y and x : y :: x : y) and showed that the corresponding analogical classifiers coincide with the set of affine Boolean functions. Moreover, we showed at IJCAI 2018 that when the function is close to being affine, the prediction accuracy remains high. These results were then extended to nominal domains at SUM 2020.
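As a toy illustration of the affine case (a minimal sketch in Python, with an arbitrarily chosen affine function; the encoding is ours, not the papers'), one can exhaustively verify that analogical inference never errs when the classifier is affine:

```python
from itertools import product

def analogy(a, b, c, d):
    # Minimal model of analogy on bits: only the patterns
    # x : x :: y : y and x : y :: x : y are valid.
    return (a == b and c == d) or (a == c and b == d)

def analogy_vec(A, B, C, D):
    # Componentwise extension to attribute tuples.
    return all(analogy(x, y, z, w) for x, y, z, w in zip(A, B, C, D))

def solve(a, b, c):
    # The unique y with a : b :: c : y, if the equation is solvable.
    sols = [y for y in (0, 1) if analogy(a, b, c, y)]
    return sols[0] if len(sols) == 1 else None

def f(x):
    # An arbitrarily chosen affine Boolean function: x1 XOR x3 XOR 1.
    return x[0] ^ x[2] ^ 1

# AIP makes no error on any analogical quadruple of 3-bit items.
errors = 0
items = list(product((0, 1), repeat=3))
for A, B, C, D in product(items, repeat=4):
    if analogy_vec(A, B, C, D):
        y = solve(f(A), f(B), f(C))
        if y is not None and y != f(D):
            errors += 1
print(errors)  # 0: inference by analogy is sound for affine f
```

Swapping in a non-affine function, e.g. `lambda x: x[0] & x[1]`, yields a nonzero error count, in line with the IJCAI 2017 characterization.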

More generally, the notion of analogy preservation gives rise to a Galois connection between classifiers and the models of analogy that they preserve. In this talk, we will explore this polarity and establish a Galois theory of analogical classifiers. We will also derive several consequences and, time permitting, discuss recent applications to NLP-related tasks.

Most of the results that will be presented constitute ongoing research work developed jointly with Erkko Lehtonen, Esteban Marquer, Pierre-Alexandre Murena, Nicolas Hug, Henri Prade and Gilles Richard.

### The mother of all leakages: How to simulate noisy leakages via bounded leakage (almost) for free

The ubiquity of real-world side-channel attacks has led to the rise of leakage-resilient cryptography -- cryptographic schemes which remain secure even when some side information is leaked from supposedly secret system components, such as private keys. Most works in leakage-resilient cryptography consider the Bounded Leakage Model, where one assumes that the side information leaked is bounded in length. However, leakage length is not a reliable estimate of leakage quality. For example, temperature or computation time measurements may require many more bits to be described than the private key under attack, but this does not necessarily mean that they fully reveal the key. Motivated by this, some works have considered the more general Noisy Leakage Model instead, where it is only required that the leakage is a sufficiently "noisy" version of the secret information, for various measures of "noise".

Given that bounded leakage is a (very special) sub-case of noisy leakage, in this talk we will be interested in the following question:

What does bounded leakage-resilience tell us about noisy leakage-resilience?

Surprisingly, we show that for common models of noisy leakage it is possible to simulate one query of noisy leakage using one query of bounded leakage with small error in the information-theoretic setting. In particular, this implies that cryptographic schemes secure against bounded leakage are also secure against noisy leakage with almost no loss in security. To complement the above, we show that our reductions are nearly-optimal.

Based on joint work with Gianluca Brian, Antonio Faonio, Maciej Obremski, Mark Simkin, Maciej Skórski, and Daniele Venturi. Available at https://eprint.iacr.org/2020/1246.

### Quasi-Nelson algebras and their logic

We introduce a generalisation of Nelson algebras having a not necessarily involutive negation. We propose to dub this class quasi-Nelson algebras, in analogy with quasi-De Morgan lattices, the non-involutive generalisation of De Morgan lattices due to Sankappanavar. We show that, similarly to the involutive case (and somewhat surprisingly), our new class of algebras can be equivalently presented as (1) quasi-Nelson residuated lattices, i.e. models of the well-known Full Lambek calculus with exchange and weakening, extended with the Nelson axiom; (2) non-involutive twist-structures, i.e. special binary products of Heyting algebras, which generalise a well-known construction for representing algebraic models of Nelson’s constructive logic with strong negation; (3) quasi-Nelson algebras, i.e. models of non-involutive Nelson logic, viewed as a conservative expansion of the negation-free fragment of intuitionistic logic; (4) the class of bounded commutative integral residuated lattices satisfying a universal algebraic property introduced in a previous paper of ours, called $(0, 1)$-congruence orderability. The equivalence of the four presentations, and in particular the extension of the twist-structure representation to the non-involutive case, is the main technical result of the paper. We hope, however, that the main impact will lie in opening new ways to (1) gain deeper insight into the distinguishing feature of Nelson’s logic (the Nelson axiom) and its algebraic counterpart (the Nelson identity), and (2) investigate certain purely algebraic properties (such as 3-potency and $(0, 1)$-congruence orderability) in a more general setting.

### Nelson’s logic S

Besides the better-known Nelson logic (N3) and paraconsistent Nelson logic (N4), in 1959 David Nelson introduced, with motivations related to arithmetic and constructibility, a logic that he called S. The logic S was originally presented by means of a calculus (crucially lacking the contraction rule) with infinitely many rule schemata, and no semantics other than the intended interpretation into arithmetic. We look here at the propositional fragment of S, showing that it is algebraizable (in fact, implicative) in the sense of Blok and Pigozzi with respect to a variety of three-potent involutive residuated lattices. We thus introduce the first known algebraic semantics for S, as well as a finite Hilbert-style calculus equivalent to Nelson’s presentation; this also allows us to settle the relation between S and the two other Nelson logics mentioned above.

Note the unusual weekday.

### Qualitative and quantitative notions of consistency and contradiction

Possibility and necessity theories rival probability theory in representing uncertain knowledge, while offering a more qualitative view of uncertainty. Moreover, necessity and possibility measures constitute, respectively, lower and upper bounds for probability measures, with the advantage of avoiding the complications of the notion of probabilistic independence.

On the other hand, paraconsistent formal systems, especially the Logics of Formal Inconsistency, can express quite carefully the circumstances of reasoning with contradictions. The aim of this talk is to merge these ideas by precisely defining new notions of possibility and necessity involving the concept of consistency (generalizing the proposal of Besnard & Lang (1994)), connected to the notions of partial and conclusive evidence. This combination permits a full treatment of contradictions, both local and global, including a gradual handling of the notion of contradiction, thus yielding a genuinely useful tool for AI and machine learning, with potential applications in logic programming via appropriate resolution rules.

Note the change to room 4.35.

### Disjoint fibring of non-deterministic matrices

We take a first definitive step towards endowing fibring, the general mechanism for combining logics, with a meaningful and useful semantics given by non-deterministic logical matrices (Nmatrices). We present and study the properties of two semantical operations: a unary operation of ω-power of a given Nmatrix, and a binary operation of strict product of Nmatrices with disjoint similarity types (signatures). We show that, together, these operations can be used to characterize the disjoint fibring of propositional logics when each of these logics is presented by a single Nmatrix. As an outcome, we also provide a decidability and complexity result for the resulting fibred logic. We illustrate the constructions with a few meaningful examples.

Joint work with Carlos Caleiro.
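For intuition about what non-determinism in an Nmatrix means, here is a hypothetical two-valued example in Python (for illustration only; it is not one of the ω-power or strict-product constructions of the talk): negation of the value 0 may evaluate to either truth value, so a formula no longer takes a unique value under a fixed assignment.

```python
from itertools import product

# A hypothetical two-valued Nmatrix: a classical table for 'or', but a
# non-deterministic negation where neg(0) may evaluate to 0 or 1.
NEG = {0: {0, 1}, 1: {0}}
OR = {(a, b): {max(a, b)} for a in (0, 1) for b in (0, 1)}

def values(phi, v):
    """All truth values phi may take under atom assignment v,
    ranging over the legal non-deterministic choices."""
    if isinstance(phi, str):                 # an atom
        return {v[phi]}
    op, *args = phi
    out = set()
    for choice in product(*(values(a, v) for a in args)):
        out |= NEG[choice[0]] if op == 'neg' else OR[choice]
    return out

# Under v(p) = 0, the negation of p may come out either way ...
print(sorted(values(('neg', 'p'), {'p': 0})))               # [0, 1]
# ... so excluded middle, p or (neg p), is no longer guaranteed:
print(sorted(values(('or', 'p', ('neg', 'p')), {'p': 0})))  # [0, 1]
```

The names `NEG`, `OR` and `values` are our own; a real Nmatrix semantics additionally fixes designated values and defines consequence by quantifying over all legal valuations.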

Note the unusual weekday and time of this seminar.

### Formally Verifying the Solution to the Boolean Pythagorean Triples Problem

The Boolean Pythagorean Triples problem asks: does there exist a binary coloring of the natural numbers such that every Pythagorean triple contains an element of each color? This problem was first solved in 2016, when Heule, Kullmann and Marek encoded a finite restriction of this problem as a propositional formula and showed its unsatisfiability. In this work we formalize their development in the theorem prover Coq. We state the Boolean Pythagorean Triples problem in Coq, define its encoding as a propositional formula and establish the relation between solutions to the problem and satisfying assignments to the formula. We verify Heule et al.'s proof by showing that the symmetry breaks they introduced to simplify the propositional formula are sound, and by implementing a correct-by-construction checker for proofs of unsatisfiability based on reverse unit propagation.

Joint work with João Marques-Silva and Peter Schneider-Kamp.
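To make the combinatorial statement concrete, the following Python sketch (illustrative only, and entirely separate from the Coq formalization) enumerates Pythagorean triples up to a small bound and brute-forces a binary coloring in which no triple is monochromatic:

```python
from itertools import product

def pythagorean_triples(n):
    # All (a, b, c) with a <= b <= c <= n and a^2 + b^2 = c^2.
    return [(a, b, c)
            for a in range(1, n + 1)
            for b in range(a, n + 1)
            for c in range(b, n + 1)
            if a * a + b * b == c * c]

def valid(coloring, triples):
    # Every triple must contain an element of each color.
    return all(len({coloring[a], coloring[b], coloring[c]}) == 2
               for a, b, c in triples)

n = 20
triples = pythagorean_triples(n)
nums = sorted({x for t in triples for x in t})
found = None
for bits in product((0, 1), repeat=len(nums)):
    coloring = dict(zip(nums, bits))
    if valid(coloring, triples):
        found = coloring
        break
print(found is not None)  # True: a valid coloring exists up to n = 20
```

The 2016 result shows that such a coloring exists for {1, ..., 7824} but not for {1, ..., 7825}; the brute-force search above is of course feasible only for tiny bounds, which is why the original proof relied on SAT solving.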

### Quantitative logic reasoning

We present a research program which investigates the intersection of deductive reasoning with explicit quantitative capabilities. These quantitative capabilities encompass probabilistic reasoning, counting and counting quantifiers, and similar systems.

The need for a combined reasoning system that enables a unified way of reasoning with quantities has long been recognized in modern logic: proposals for probabilistic logic reasoning are present already in the work of Boole [1854]. Equally ubiquitous is the need to deal with cardinality restrictions on finite sets.

We show that there is in fact a common way to deal with these several quantitative deductive capabilities, through a framework based on Linear Algebra and Linear Programming, with the distinction between probabilistic and cardinality reasoning arising from the different families of algebras employed.
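As a hedged, two-atom illustration of this algebraic viewpoint (our own toy example, not taken from the talk): deciding whether probability assignments to p, q and p ∧ q are jointly consistent amounts to asking whether nonnegative masses can be placed on the four possible worlds, a linear feasibility problem that happens to be fully determined in this tiny case.

```python
def psat_consistent(p, q, pq, eps=1e-9):
    # Possible-world masses for two atoms, determined by the constraints
    #   P(p) = w11 + w10,  P(q) = w11 + w01,  P(p and q) = w11,  sum = 1.
    w11 = pq
    w10 = p - pq
    w01 = q - pq
    w00 = 1.0 - (w11 + w10 + w01)
    # The assignment is consistent iff all four masses are nonnegative.
    return all(w >= -eps for w in (w00, w01, w10, w11))

print(psat_consistent(0.7, 0.6, 0.3))  # True
print(psat_consistent(0.7, 0.6, 0.2))  # False: total mass would exceed 1
```

With more atoms or inequality constraints the system is underdetermined, and deciding feasibility becomes a genuine linear program over the 2^n possible worlds; that is where the Linear Programming machinery enters.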

Quantitative logic systems are particularly amenable to the introduction of inconsistency measurements, which quantify the degree of inconsistency of a given quantitative logic theory while obeying some basic principles of inconsistency measurement.

Quantitative logic reasoning is thus presented as a non-explosive method that supports inference in the presence of quantitative logic inconsistencies, based on the principle that inferences can be obtained by minimizing the inconsistency measurement.

### Foundational questions in choreographic programming

Choreographies are widely used for the specification of concurrent and distributed software architectures. In previous work, we proposed Core Choreographies (CC), a model for investigating foundational questions in choreography calculi. In this talk, we discuss two such questions: asynchrony and behaviour extraction.

Asynchronous communication is ubiquitous in real-world systems, and we show how it can be modeled either by endowing CC with a different semantics or by minimally extending its syntax with message queues. Since CC is a core model, these constructions can be generalized to most choreography languages.

Extraction concerns the problem of deciding whether the behaviour of a given process network can be expressed as a choreography and, if so, of finding that choreography. In general, these questions are undecidable; we show that for CC they can be answered algorithmically.

### On Expressive Power of Regular Expressions over Infinite Orders

Two fundamental results of classical automata theory are the Kleene theorem and the Büchi-Elgot-Trakhtenbrot theorem. Kleene's theorem states that a language of finite words is definable by a regular expression iff it is accepted by a finite-state automaton. The Büchi-Elgot-Trakhtenbrot theorem states that a language of finite words is accepted by a finite-state automaton iff it is definable in weak monadic second-order logic.

Hence, weak monadic second-order logic and regular expressions are expressively equivalent over finite words. We generalize this equivalence to words over arbitrary linear orders.

Please note that this session is scheduled for Thursday, rather than the more usual Friday, and in a different room.

### Monosequent proof systems

In this talk I will discuss sequent systems with only one formula on the right and one formula on the left. I will compare these systems with others (standard sequent systems, and intuitionistic and dual-intuitionistic sequent systems), and show how known results, in particular cut-elimination, can be generalized to this framework. I will also study some specific cases and applications of monosequents, such as non-distributive logics.

### On Extensions of Fuzzy Semantics for Propositional Connectives

There are several operators related to fuzzy logic which generalize classical ones. For instance, triangular norms and conorms (t-norms and t-conorms for short) generalize the classical connectives "and" and "or". A natural question arises: given a t-norm T defined on a sublattice M of a bounded lattice L and satisfying a property P, is it possible (1) to extend T to a function T' on L in such a way that T' is also a t-norm, and (2) for T' to also satisfy property P? In this talk we present two different methods for extending t-norms, t-conorms, fuzzy negations and implications (namely, extension via retractions and extension via e-operators) from a sublattice to a lattice, considering a generalized notion of sublattice. For both methods goal (1) is completely achieved; however, extension via retractions fails to preserve some properties of these fuzzy connectives.
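For readers meeting t-norms for the first time, here are the three standard examples on the unit interval (textbook material, not the extension constructions of the talk), together with a numerical check of the t-norm axioms on a finite grid:

```python
def t_min(x, y):        # Gödel (minimum) t-norm
    return min(x, y)

def t_prod(x, y):       # product t-norm
    return x * y

def t_luk(x, y):        # Łukasiewicz t-norm
    return max(0.0, x + y - 1.0)

# Check commutativity, associativity and the neutral element 1
# on a grid of values (the monotonicity check is analogous).
grid = [i / 10 for i in range(11)]
for T in (t_min, t_prod, t_luk):
    for x in grid:
        assert abs(T(x, 1.0) - x) < 1e-9              # 1 is neutral
        for y in grid:
            assert abs(T(x, y) - T(y, x)) < 1e-9      # commutative
            for z in grid:
                assert abs(T(T(x, y), z) - T(x, T(y, z))) < 1e-9  # associative
print("t-norm axioms hold on the grid")
```

The extension question of the talk then asks when such operators, defined only on a sublattice, can be lifted to the whole lattice while keeping properties like these.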

### Introducing I-distances

In this talk we present a new generalization of the mathematical notion of distance, based on abstracting the codomain of the distance function. The resulting functions must satisfy a generalized triangle inequality, which depends only on the order structure of the valuation space, i.e., no monoid structure is required. Such functions will be called i-distances (i-metrics, i-quasi-metrics, etc.). We show that they generate a topology in a very natural way, based on open balls. The concept has been successfully applied in the fields of clustering algorithms and pattern recognition; we show how it was applied in these fields, as well as some theoretical results.

### Super-Belnap logics

Belnap-Dunn logic is a well-known four-valued logic that has been widely applied in computer science since its introduction forty years ago. From the point of view of algebraic logic, it is an interesting and natural example of a non-protoalgebraic logic. I will take a look at extensions of this logic (i.e. stronger logics, but in the same language). The final aim would be to describe their hierarchy and to provide axiomatizations for (some of) them. However, major difficulties in accomplishing this stem from the facts that (1) most super-Belnap logics are themselves non-protoalgebraic and (2) there are (at least) uncountably many of them.

### The Leibniz hierarchy via the Suszko operator

The main classification of sentential logics in Abstract Algebraic Logic (AAL) is the so-called Leibniz hierarchy. It has its roots in the seminal work of Blok and Pigozzi on algebraizable logics, and it classifies a given logic according to algebraic properties of the Leibniz operator over the logical filters. In this talk we shall focus on the less-known Suszko operator, introduced by Czelakowski, which seems to play a more significant rôle when dealing with non-protoalgebraic logics. We present some recent developments in AAL arising from this shift of paradigm. Namely, we characterize the main classes of logics within the Leibniz hierarchy through algebraic properties of the Suszko operator, and present new isomorphism theorems for protoalgebraic and equivalential logics, in the same spirit as the known ones for weakly-algebraizable and algebraizable logics. This is joint work with Ramon Jansana and Josep Maria Font.

### Modal logic need not be about necessity

This talk will look at non-classical negations and their corresponding adjustment connectives from a modal viewpoint, over complete distributive lattices, and apply a very general mechanism in order to offer adequate analytic proof systems to logics that are based on them. Defining non-classical negations within usual modal semantics automatically allows one to treat equivalent formulas as synonymous, and to have a natural justification for a global version of the contraposition rule. From that perspective, our study offers a particularly useful environment in which negative modalities and their companions may be used for dealing with inconsistency and indeterminacy. After investigating modal logics based on arbitrary frames, we extend the results to serial frames, reflexive frames, and functional frames. In each case we also investigate when and how classical negation may thereby be defined. This reports on joint work with Ori Lahav and Yoni Zohar.

### Reachability Problems for Continuous Linear Dynamical Systems

This talk is about reachability problems for continuous linear dynamical systems. A central decision problem in this area is the Continuous Skolem Problem, which asks whether a real-valued function satisfying an ordinary linear differential equation has a zero. This can be seen as a continuous analog of the Skolem Problem for linear recurrence sequences, which asks whether the sequence satisfying a given recurrence has a zero term. For both the discrete and continuous versions of the Skolem Problem, decidability is open. We show that the Continuous Skolem Problem lies at the heart of many natural verification questions on linear dynamical systems, such as continuous-time Markov chains and linear hybrid automata. We describe some recent work that uses results in transcendence theory and real algebraic geometry to obtain decidability for certain variants of the problem. In particular, we consider a bounded version of the Continuous Skolem Problem, corresponding to time-bounded reachability. We prove decidability of the bounded problem assuming Schanuel's conjecture, one of the main conjectures in transcendence theory. We describe some partial decidability results in the unbounded case and discuss mathematical obstacles to proving decidability of the Continuous Skolem Problem in full generality. This is joint work with Ventsislav Chonev and Joel Ouaknine.
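To fix intuitions about the discrete version (our own sketch; searching a finite prefix is only a semi-decision procedure, which is precisely why the problem's decidability is hard), one can look for zero terms of an integer linear recurrence by direct iteration:

```python
def first_zero(coeffs, init, bound):
    # Sequence u_{n+k} = c1*u_{n+k-1} + ... + ck*u_n, with
    # coeffs = [c1, ..., ck] and initial values init = (u_0, ..., u_{k-1}).
    u = list(init)
    while len(u) < bound:
        u.append(sum(c * u[-i - 1] for i, c in enumerate(coeffs)))
    for n, v in enumerate(u):
        if v == 0:
            return n   # index of the first zero term found
    return None        # no zero among u_0, ..., u_{bound-1}

# u_{n+2} = u_{n+1} - u_n with u_0 = u_1 = 1: 1, 1, 0, -1, -1, 0, ...
print(first_zero([1, -1], (1, 1), 20))   # 2
# u_{n+2} = 3*u_{n+1} - 2*u_n with u_0 = 1, u_1 = 3: u_n = 2^(n+1) - 1
print(first_zero([3, -2], (1, 3), 20))   # None
```

A `None` answer here proves nothing beyond the bound: certifying that the second sequence never hits zero needs its closed form, and no algorithm is known that decides this for arbitrary linear recurrences.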

### Generalized Probabilistic Satisfiability

We analyze the GenPSAT problem, which is an extension of the probabilistic satisfiability problem to linear combinations of probabilistic propositional formulas. GenPSAT is shown to be NP-complete and a reduction to the Mixed Integer Programming problem is explored. Furthermore, we present a GenPSAT solver and analyze the presence of phase transition behaviour. This is joint work with Filipe Casal and Carlos Caleiro.

### Combined logics: characterizing mixed reasoning and applications

Fibring is a powerful mechanism for combining logics, and an essential tool for designing and understanding complex logical systems. Given two logic systems L1 and L2, their fibring L1 • L2 is the smallest logical system for the combined language which contains both L1 and L2. From the point of view of Hilbert systems, this corresponds precisely to putting together the Hilbert systems of the two logics, allowing instantiations with formulas in the joint signature along the derivations. In this presentation we give a full characterization of the consequence relation that emerges from fibring in the particular case where the logics being combined do not share any connectives. We show the power of this tool by presenting various applications on important problems regarding combined logics: (1) conservativity, (2) decidability and complexity, and (3) the preservation of many-valuedness. Joint work with C. Caleiro.

### Gödel's functional (‘dialectica’) interpretation

In this talk we start by describing weakly extensional Heyting arithmetic in all finite types. We then present Gödel's functional or ‘dialectica’ interpretation of arithmetic, and discuss its soundness proof and the circumstances in which a formula is equivalent to its interpretation.


Funded under the scope of UID/EEA/50008/2013.