Conference proceedings
In this talk we discuss the relationship between different semantic artefacts of the lambda calculus, paying special attention to a normalising strategy in Plotkin's lambda-value calculus, which we call strict normal order.
This Master's thesis develops and analyses static partial order reduction techniques for models of probabilistic systems. The properties of such systems can be verified via model checking, which suffers from the state-space explosion problem; this problem can make verification intractable, and partial order reductions aim to alleviate it. As an outcome of this work, two new static partial order reduction techniques, named Naïve SPOR and Reachability-Aware SPOR, were defined, together with a recommendation of the situations in which each should be used. The latter achieves a better reduction than the former when the system to be verified is not probabilistic or when the property to be checked can be expressed in a linear temporal logic. The LiQuor model checker was extended to execute both reduction techniques, which were then used to reduce models of some classical concurrent systems. Several properties of the reduced and unreduced models were verified using symbolic and explicit model checking techniques. The analysis of these experiments leads to the conclusion that static partial order reduction techniques are more conveniently combined with symbolic model checking than with explicit model checking.
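To make the underlying idea concrete, the following Haskell sketch illustrates the ample-set intuition behind static partial order reduction: at each state, expand only a subset of the enabled transitions when independence allows it. Everything here (the State and Action types, the step function, and the independence relation) is a hypothetical stand-in for a model checker's interfaces, and the sketch deliberately omits the persistence and cycle conditions that a sound reduction must also check.

    import qualified Data.Set as Set

    type State  = Int        -- hypothetical state representation
    type Action = String     -- hypothetical action labels

    -- Explore the reduced state space: at each state, expand only an
    -- "ample" subset of the enabled transitions.
    explore :: (State -> [(Action, State)])   -- enabled transitions of the model
            -> (Action -> Action -> Bool)     -- independence relation
            -> State                          -- initial state
            -> Set.Set State
    explore step indep s0 = go Set.empty [s0]
      where
        go seen []     = seen
        go seen (s:ss)
          | s `Set.member` seen = go seen ss
          | otherwise           =
              go (Set.insert s seen) ([s' | (_, s') <- ample s] ++ ss)

        -- Naive ample set: prefer a single enabled action that is
        -- independent of every other enabled action; otherwise expand
        -- everything. A sound SPOR additionally checks persistence and
        -- a cycle condition, both omitted in this sketch.
        ample s =
          let en = step s
          in case [t | t@(a, _) <- en,
                       all (\(b, _) -> a == b || indep a b) en] of
               (t:_) -> [t]
               []    -> en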
Theorem provers based on dependent types rely on efficient full-reducers that implement a beta-equivalence tester for their typing rules. We present a full-reducing strategy in a strict calculus allowing open terms. We traverse a path that starts at weak reduction in a non-strict calculus restricted to closed terms (call-by-name). By successively strengthening the congruence rules (reducing under lambda abstractions), ensuring normalisation (resorting to hybrid strategies to find a normal form, if one exists), restricting the notion of reduction (allowing only values as arguments), and generalising the set of legal terms (including open terms), we obtain a normalising strategy in Plotkin's lambda_V theory. We show how these four aspects (strengthened congruence, normalisation, by-value semantics, and open terms) are entangled in this traversal. Meta-theoretic issues regarding standardisation and confluence lead the way to full reduction.
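The starting point of that path can be made concrete with a short sketch. The following Haskell normaliser is not the paper's strict strategy but the classical non-strict one; it shows the hybrid structure the abstract alludes to, a weak-head (call-by-name) reducer strengthened with congruence rules that reduce under lambda abstractions:

    -- Untyped lambda terms.
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- Naive substitution: safe only when bound names are distinct;
    -- a real implementation must be capture-avoiding.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y) | x == y    = s
                      | otherwise = Var y
    subst x s (Lam y b) | x == y    = Lam y b
                        | otherwise = Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- Weak reduction, call-by-name: no reduction under lambda.
    whnf :: Term -> Term
    whnf (App f a) = case whnf f of
      Lam x b -> whnf (subst x a b)
      f'      -> App f' a
    whnf t = t

    -- Full reduction as a hybrid strategy: reach weak head normal
    -- form first, then recurse into subterms, including under the
    -- lambda (the strengthened congruence rules).
    nf :: Term -> Term
    nf t = case whnf t of
      Lam x b -> Lam x (nf b)
      App f a -> App (nf f) (nf a)   -- f is not an abstraction here
      v       -> v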
Coding rules are used in industry to enforce good software practices that improve software reliability and maintainability. Some examples of standard rule sets are MISRA-C/C++ and High Integrity C++. In previous work we developed a framework to formalize coding rules as logic programs that can later be run against a knowledge base describing the program to inspect. In order to formalize all the rules in standard sets, the knowledge base needs to contain information about the program ranging from syntactic properties to undecidable semantic ones. Since many of the complex properties that coding rules depend upon (or approximations to them) are computed by static analyzers described in the literature, we aim at reusing them, presenting their results in a uniform way that can be easily understood and combined by the engineers responsible for defining new rules. In fact, some of those analyses are implemented in modern compilers, but their results are almost exclusively applied to optimization. In this talk we present initial work on using the LLVM compiler infrastructure (http://llvm.org) to enrich our program facts base with pointer aliasing information, and we show how this information can be used to define and enforce actual coding rules.
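As an illustration (not the framework's actual Prolog encoding) of how a rule can consume analysis results, here is a hypothetical Haskell sketch of a coding rule run over a small facts base; the fact names (mayAlias, freedAt, usedAt) and the rule itself are invented for the example:

    type Var = String

    -- Facts produced by static analysis (e.g., alias analysis):
    -- mayAlias p q means p and q may refer to the same object.
    mayAlias :: [(Var, Var)]
    mayAlias = [("p", "q"), ("q", "r")]

    freedAt :: [(Var, Int)]   -- pointer freed at a source line
    freedAt = [("p", 10)]

    usedAt :: [(Var, Int)]    -- pointer dereferenced at a source line
    usedAt = [("q", 20)]

    -- Rule (informally): report a violation when a pointer is used
    -- after a pointer it may alias has been freed.
    violations :: [(Var, Int)]
    violations =
      [ (u, lu)
      | (f, lf) <- freedAt
      , (u, lu) <- usedAt
      , lf < lu
      , f == u || (f, u) `elem` mayAlias || (u, f) `elem` mayAlias
      ]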
In this paper we present work in progress on the formal verification of a process supervisor using the McErlang model checker. The process supervisor is an alternative implementation of the standard supervisor behaviour of Erlang/OTP. This implementation, currently in use at the company LambdaStream, was checked against several safety and liveness properties.
We present the RFuzzy framework, a Prolog-based tool for representing and reasoning with fuzzy information. Compared with previous tools along this line of research, the advantages of our framework are its simple, user-friendly syntax and the expressiveness it gains from default values and types. We describe the formal syntax, the operational semantics, and the declarative semantics of RFuzzy (based on a lattice). A least model semantics, a least fixpoint semantics, and an operational semantics are introduced, and their equivalence is proven. We provide an implementation that is free and publicly available (it can be downloaded from http://babel.ls.fi.upm.es/software/rfuzzy/). Besides implementation details, we also discuss some actual applications of RFuzzy.
Keywords: Fuzzy Logic, Logic Programming Application, Knowledge Representation and Reasoning, Semantics, Implementation
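The lattice-based semantics described in the abstract above can be pictured with a small sketch. The following Haskell fragment mimics its ingredients (truth degrees from a lattice, default values, and rule bodies combined with a t-norm); the predicates and degrees are invented, and RFuzzy's real syntax is Prolog-based:

    type Truth = Double   -- the lattice: the unit interval [0,1]

    -- A fuzzy fact with explicit truth degrees...
    tallFact :: String -> Maybe Truth
    tallFact "john" = Just 0.7
    tallFact "mary" = Just 0.4
    tallFact _      = Nothing

    -- ...completed with a default value for unknown individuals.
    tall :: String -> Truth
    tall x = maybe 0.5 id (tallFact x)

    fast :: String -> Truth
    fast "john" = 0.8
    fast _      = 0.1

    -- A rule, goodPlayer(X) <- tall(X), fast(X), with the body
    -- combined by the minimum t-norm (Goedel conjunction).
    goodPlayer :: String -> Truth
    goodPlayer x = min (tall x) (fast x)
    -- goodPlayer "john" == 0.7; goodPlayer "mary" == 0.1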
We present a framework for the representation and resolution of first-order unification problems and their abstract syntax in a variable-free relational formalism that is an executable variant of the Tarski-Givant relation algebra and of Freyd's allegories, restricted to the fragment necessary to compile and run logic programs. We develop a decision procedure for the validity of relational terms, which corresponds to solving the original unification problem. The decision procedure is carried out by a conditional relational-term rewriting system. This approach has advantages over classical unification approaches. First, cumbersome and underspecified meta-logical procedures (name clashes, substitution, etc.) and their properties (invariance under substitution of ground terms, equality's congruence with respect to term formation, etc.) are captured algebraically within the framework. Second, other unification problems can be accommodated; for example, existential quantification in the logic can be interpreted as a new operation whose effect is to formalize the costly and error-prone handling of fresh names (renaming apart).
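For contrast, the following Haskell sketch implements the classical, meta-logical unification procedure that the relational formalism re-expresses algebraically. Substitution and the occurs check, handled explicitly here, are exactly the kind of machinery that becomes rewrite rules on relation terms in the relational setting:

    data Tm = V String | F String [Tm]
      deriving (Eq, Show)

    type Subst = [(String, Tm)]

    apply :: Subst -> Tm -> Tm
    apply s (V x)    = maybe (V x) id (lookup x s)
    apply s (F f ts) = F f (map (apply s) ts)

    occurs :: String -> Tm -> Bool
    occurs x (V y)    = x == y
    occurs x (F _ ts) = any (occurs x) ts

    -- Robinson-style unification; Nothing means the terms do not unify.
    unify :: Tm -> Tm -> Maybe Subst
    unify (V x) t
      | t == V x   = Just []
      | occurs x t = Nothing            -- the occurs check, by hand
      | otherwise  = Just [(x, t)]
    unify t (V x)  = unify (V x) t
    unify (F f ss) (F g ts)
      | f == g && length ss == length ts = go [] (zip ss ts)
      | otherwise                        = Nothing
      where
        go s []            = Just s
        go s ((a, b):rest) = do
          s' <- unify (apply s a) (apply s b)
          go (compose s' s) rest

    -- Substitution composition, again handled explicitly.
    compose :: Subst -> Subst -> Subst
    compose s' s = [(x, apply s' t) | (x, t) <- s] ++ s'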