14/11/2024, 17:00 — 18:00 — Room P3.10, Mathematics Building (also online)
Sandra Alves, Departamento de Ciência de Computadores, Faculdade de Ciências, UPorto
Quantitative Weak Linearisation
Weak linearisation was defined years ago through a static characterisation of the intuitive notion of virtual redex, based on (legal) paths computed from the (syntactical) term tree. Weak-linear terms impose a linearity condition only on functions that are applied (consumed by reduction); functions that are not applied (and therefore persist in the term along any reduction) may be non-linear. This class of terms was shown to be strongly normalising, with typability decidable in polynomial time. We revisit this notion through non-idempotent intersection types (also called quantitative types). By using an effective characterisation of minimal typings, based on the notion of tightness, we are able to distinguish between “consumed” and “persistent” term constructors, which allows us to define an expansion relation between general lambda-terms and weak-linear lambda-terms that preserves normal forms under reduction.
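As a rough illustration of the quantitative flavour (my gloss, not an example from the talk), non-idempotent intersection types record one type per use of the bound variable, so the typing itself counts resources:

\[
\lambda x.\,x \;:\; [\sigma] \to \sigma
\qquad\qquad
\lambda x.\,x\,x \;:\; [\,[\sigma]\to\tau,\ \sigma\,] \to \tau
\]

The identity uses its argument exactly once (a singleton multiset), whereas $\lambda x.\,x\,x$ uses it twice, once as a function and once as an argument (a two-element multiset). Roughly, weak-linearity tolerates such non-linear abstractions only in positions where they are never applied, and tight (minimal) typings are what make this consumed/persistent distinction effective.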
26/06/2024, 17:00 — 18:00 — Online
Bruno Dinis, Department of Mathematics, UÉvora
Nonstandard Analysis meets Philosophy
Nonstandard analysis (NSA), founded by Abraham Robinson in the 1960s, was to a great extent inspired by Leibniz’s ideas and intuitions concerning the use of infinitesimal and infinitely large quantities. One of the greatest features of NSA is that, by allowing a correct formulation of infinitesimals, one is able to reason using orders of magnitude. This means that one can give precise meaning to, and reason formally about, otherwise vague terms such as "small" or "large". Recently, accounts of vagueness relying on NSA were introduced [2, 8, 4]. In particular, and unlike other accounts of vagueness, the so-called nonstandard primitivist account [4, 5] embraces transitivity for marginal differences (i.e. "small" differences), but not for large differences, in a soritical series. Nonstandard primitivism also seems particularly well suited to deal with the ship of Theseus paradox [3, 6], and may also shed some light on doxastic reasoning by considering infinitesimal probabilities and associating them with infinitesimal credences [1, 7]. We aim to assess the relative merits of nonstandard primitivism and to indicate some lines of future research regarding the connections between NSA and philosophy.
(This is joint work with Bruno Jacinto)
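As a rough formal gloss (mine, not the authors' definitions): write $x \simeq y$ when $x - y$ is infinitesimal. Since infinitesimals are closed under addition,

\[
x \simeq y \ \text{ and } \ y \simeq z \ \Longrightarrow\ x \simeq z,
\]

so marginal difference is transitive; yet $N\cdot\varepsilon$ need not be infinitesimal when $N$ is an unlimited hypernatural, so soritically many marginal steps can still add up to a large difference. This is the sense in which transitivity is kept for small differences but not for large ones.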
- [1] Vieri Benci, Leon Horsten, and Sylvia Wenmackers. Infinitesimal probabilities. The British Journal for the Philosophy of Science, 69(2):509–552, 2018.
- [2] Walter Dean. Strict finitism, feasibility, and the sorites. Review of Symbolic Logic, 11(2):295–346, 2018.
- [3] Bruno Dinis. Equality and near-equality in a nonstandard world. Logic and Logical Philosophy, 32(1):105–118, 2023.
- [4] Bruno Dinis and Bruno Jacinto. A theory of marginal and large difference. Erkenntnis, 2023.
- [5] Bruno Dinis and Bruno Jacinto. Marginality scales for gradable objects. Preprint, 2023.
- [6] Bruno Dinis and Bruno Jacinto. Counterparts as near-equals. Preprint, 2024.
- [7] Kenny Easwaran. Regularity and hyperreal credences. Philosophical Review, 123(1):1–41, 2014.
- [8] Yair Itzhaki. Qualitative versus quantitative representation: a non-standard analysis of the sorites paradox. Linguistics and Philosophy, 44:1013–1044, 2021.
16/05/2024, 17:00 — 18:00 — Online
Alessandro Gianola, Department of Computer Science and Engineering and INESC, Técnico ULisboa
Uniform Interpolation in Formal Verification
In the last two decades, Craig interpolation has emerged as an essential tool in formal verification, where first-order theories are used to express constraints on the system, such as constraints on the datatypes manipulated by programs. Interpolants for such theories are widely exploited as an efficient method to approximate the reachable states of the system and for invariant synthesis. In this talk, we report on recent results on a stronger form of interpolation, called uniform interpolation, and on its connection with the notion of model completion from model-theoretic algebra. We discuss how uniform interpolants can be used in the context of formal verification of infinite-state systems to develop effective techniques for computing the reachable states in an exact way. Finally, we present some results about the transfer of uniform interpolants to theory combinations. We argue that methods based on uniform interpolation are particularly effective and computationally efficient when applied to the verification of so-called data-aware processes: systems in which the control flow of a process interacts with a data storage layer.
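For orientation, here is one common formulation of the two notions, stated modulo a background theory $T$ (a generic textbook-style gloss, not necessarily the talk's exact setting). A Craig interpolant for an entailment $A \models_T B$ is a formula $I$ over the symbols shared by $A$ and $B$ such that $A \models_T I$ and $I \models_T B$. A uniform interpolant of $A$ with respect to a subsignature $\Sigma$ is a single $\Sigma$-formula $U$ such that $A \models_T U$ and $U \models_T B$ for every $\Sigma$-formula $B$ entailed by $A$; it depends only on $A$ and $\Sigma$ and thus interpolates all such $B$ at once, which is what allows reachable states to be computed exactly rather than merely approximated.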
11/04/2024, 17:00 — 18:00 — Online
Bruno Loff, Department of Mathematics and LASIGE, Ciências ULisboa
Why is solving the P vs NP problem so hard?
I will give an informal board-and-chalk presentation about the difficulty of proving lower bounds in computational complexity.
The P vs NP problem is one of the most famous unsolved problems in mathematics. One may phrase the P vs NP question in various equivalent ways. One phrasing, which is not completely equivalent but almost, is the following (the "P/poly vs NP" problem): does there exist a small Boolean circuit which solves the CLIQUE problem? I.e., does there exist a $\operatorname{poly}(N)$-size Boolean circuit which, when given as input the $N\times N$ adjacency matrix of an undirected graph, decides whether the graph has a clique of size $N/2$?
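To make the question concrete, here is a minimal brute-force check of the property in question (my illustration, not the speaker's; the function name is mine). It inspects all subsets of $N/2$ vertices and therefore runs in exponential time; the open question is whether a $\operatorname{poly}(N)$-size circuit can decide the same property.

    from itertools import combinations

    def has_half_clique(adj):
        """Return True iff the graph given by the N x N adjacency matrix
        `adj` (a list of lists of 0/1) contains a clique on N // 2 vertices.
        Brute force: checks every candidate vertex set."""
        n = len(adj)
        k = n // 2
        for candidate in combinations(range(n), k):
            if all(adj[u][v] for u, v in combinations(candidate, 2)):
                return True
        return False

    # Example: a 4-vertex graph in which vertices 0 and 1 form a clique of size 2.
    graph = [[0, 1, 0, 0],
             [1, 0, 0, 0],
             [0, 0, 0, 1],
             [0, 0, 1, 0]]
    print(has_half_clique(graph))  # True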
Complexity theorists, myself included, believe that the answer is no. We believe that there exists a super-polynomial "lower bound" on the complexity of CLIQUE. Many people have tried proving such a lower bound, and so far all have failed. But why? Why is the problem so difficult?
In the late 1980s, Alexander Razborov proved that there exist no $\operatorname{poly}(N)$-size "monotone" circuits for solving CLIQUE. Namely, if we forbid Boolean circuits from using negations, so that they can only compute ORs and ANDs, then polynomial-size circuits cannot solve the CLIQUE problem. He (and many others) then tried to prove the same result for ordinary circuits (with negation gates). And he failed (and they all failed, too). But along the way he (and many others) proved many different lower bounds: lower bounds for simpler kinds of circuits (e.g. constant-depth circuits), lower bounds for communication protocols (a different but related computational model), and lower bounds for other models. Razborov proved these lower bounds, and he also thought long and hard about why lower bounds against Boolean circuits were so difficult to prove.
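For concreteness, a monotone circuit in this sense is a DAG of AND/OR gates over the input bits, with no NOT gates; here is a toy evaluator (my illustration, with a made-up circuit representation). Because there are no negations, switching any input from 0 to 1 can never switch the output from 1 to 0.

    # A gate is ("in", i), ("and", g1, g2) or ("or", g1, g2), where g1, g2 are
    # indices of earlier gates; the circuit is a list of gates and the output
    # is the value of the last gate.
    def eval_monotone(circuit, bits):
        vals = []
        for gate in circuit:
            if gate[0] == "in":
                vals.append(bits[gate[1]])
            elif gate[0] == "and":
                vals.append(vals[gate[1]] & vals[gate[2]])
            else:  # "or"
                vals.append(vals[gate[1]] | vals[gate[2]])
        return vals[-1]

    # (x0 AND x1) OR x2
    circuit = [("in", 0), ("in", 1), ("in", 2), ("and", 0, 1), ("or", 3, 2)]
    print(eval_monotone(circuit, [1, 1, 0]))  # 1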
In 1994, Razborov and Stephen Rudich presented their paper "Natural Proofs", which offered a very reasonable explanation for why circuit lower bounds are difficult to prove. They showed, remarkably, that every single lower-bound proof known at the time had a certain "logical structure" (or could be made to have such a structure by small changes to the argument). This logical structure made the proofs very simple and natural, and they called proofs with such a structure "natural proofs". They then showed that super-polynomial lower bounds on CLIQUE cannot be proved using natural proofs, unless certain cryptographic primitives, such as those based on factoring, are insecure. This is a kind of informal independence result. (Based on the natural proofs result, Razborov also later proved formal independence results, showing that P vs NP is independent of certain weak systems of arithmetic, but I do not know the details of those.)
In one fell swoop, Razborov and Rudich ruled out every single lower-bound technique known at the time, saying: these techniques are not enough to solve the P vs NP problem (unless cryptography is insecure). To a very large extent this barrier still applies today, as almost all the lower-bound proofs that we know are natural proofs, i.e., they have the very same logical structure as the proofs known since the 1980s.
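Roughly, and paraphrasing the standard definition rather than the speaker: a property $P$ of Boolean functions $f\colon\{0,1\}^n\to\{0,1\}$ is natural if it is constructive, meaning that $P(f)$ can be decided in time polynomial in the size $2^n$ of the truth table of $f$, and large, meaning that $P$ holds for at least a $2^{-O(n)}$ fraction of all $n$-bit Boolean functions; it is useful against $\mathrm{P/poly}$ if functions satisfying $P$ have no polynomial-size circuits. Razborov and Rudich show that a natural property useful against $\mathrm{P/poly}$ would yield a distinguisher breaking exponentially hard pseudorandom function families, which is the cryptographic assumption alluded to above.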
In this seminar, I will explain what a "natural proof" is, and why it is reasonable to expect that no natural proof can solve the P vs NP problem. Only a few words will be said about some of my own research and how it connects to this topic.
27/02/2023, 17:30 — 18:30 — Online
Walter A. Carnielli, Departamento de Filosofia e Centro de Lógica, Epistemologia e Ciências Humanas – UNICAMP, Brasil
Engineering and mathematics are human, just as the humanities should be ingenious: joining what should not have been separated
The historical roots of the division between engineering, mathematics and the natural sciences, on the one hand, and the human sciences and the arts, on the other, go back to the Renaissance and the Enlightenment, inheriting the split between the Quadrivium, with arithmetic, geometry, music and astronomy, and the Trivium, which emphasised the humanities. The interest in classifying and organising knowledge led to the division into different fields and disciplines. The natural sciences and mathematics were seen as objective and based on universal principles, while the human sciences and the arts were seen as subjective and based on personal opinions and interpretations. But all of this is changing at an astonishing pace, as shown by the Metaverse, GPT-3, ChatGPT and the Unreal Engine, the highly advanced real-time 3D creation and filmmaking tool for immersive experiences and the development of new commercial products. Engineers should draw on knowledge and insights from the arts, literature and filmmaking to create more attractive and easier-to-use products and systems. For example, an understanding of narrative techniques [4] can inform the design of video games and user interfaces. Likewise, knowledge of human psychology and sociology can help create more useful and beneficial products and systems [3]. Conversely, philosophers and human scientists can benefit from paying attention to engineering and mathematics [1], gaining a better understanding of the technologies and systems that shape our lives. Some authors already appeal to dual-process theory, combining intuitive thinking, which is associative, automatic, parallel and subconscious, with deliberative thinking, which is rule-based, serial and conscious, as proposed by the Computational Logic programme [2]. I intend to discuss these ideas and some real cases to illustrate that, beyond the practical issues, the topic involves questions of philosophical and methodological interest.
- [1] Walter Carnielli. How AI can be surprisingly dangerous for the philosophy of mathematics — and of science. Circumscribere, 27, 2021.
- [2] Robert Kowalski. Computational Logic and Human Thinking: How to Be Artificially Intelligent. Cambridge University Press, 2011.
- [3] P. N. Johnson-Laird, S. Khemlani, and G. P. Goodwin. Logic, probability, and human reasoning. Trends in Cognitive Sciences, 19(4):201–214, 2015.
- [4] Robert J. Shiller. Narrative Economics: How Stories Go Viral and Drive Major Economic Events. Princeton University Press, 2020.
15/11/2022, 17:30 — 18:30 — Room P3.10, Mathematics Building (also online)
Olga Pombo, Faculdade de Ciências, Universidade de Lisboa
Leibniz' rank in the History of Logic
Leibniz is universally recognised as one of the most important precursors of modern logic. We will survey the major themes of Leibniz's logical thought and assess the meaning they take on against the scientific and philosophical developments that form their background (namely, the difficulties of traditional formal logic, Descartes's dogmatic intuitionism, and the exemplary status of mathematics). We will also point out the specificity of the Leibnizian projects of a Mathesis Universalis and of a maximal extension of the idea of logic as a Generalissima Ars Cogitandi, identify the foundations of his project of a formalised, algorithmic and mechanisable logic and, finally, discuss the potential and the limits of Leibniz's efforts and achievements in the construction of a general, categorical, complete and universal symbolic language. We will also have the opportunity to highlight how Leibniz's cognitive conception of language makes him a pioneer of the theory of computation.
27/10/2022, 16:30 — 17:30 — Room P3.10, Mathematics Building
António Zilhão, Faculdade de Letras, Universidade de Lisboa
Concept and Logic in Frege
Gottlob Frege (1848-1925) is known in logical and philosophical circles above all for three achievements, all of them remarkable: introducing a formal theory of quantification into modern logic; conceiving a sophisticated logicist programme within the philosophy of mathematics; and developing a kind of investigation with a strong semantic content which inspired (in a somewhat equivocal way, it must be said) the advent of the so-called linguistic turn carried out by Analytic Philosophy in the second half of the twentieth century. There is, however, another equally remarkable achievement of Frege's, within the scope of his work, which tends to be mentioned far less than the previous ones: having proposed an entirely new view of the concept. In fact, this last achievement is not only no less relevant than the others but also underlies them, for better and for worse. In this talk, I will try to justify this thesis, highlighting the novelty and originality of the Fregean treatment of the concept, as well as its logical and philosophical import.