Although the relevance of fuzzy information for representing real-life concepts is evident, almost all databases contain only crisp information. The main reason for this, apart from tradition, is that fuzzy information is usually subjective, and storing every user's point of view is unfeasible. Allowing fuzzy concepts in queries increases their expressiveness: asking for cheap products, big sizes, or close hotels is far more natural than asking for products with a price under X, of size Y, or hotels at most X kilometres away. Our proposal for achieving such more expressive database queries is to add, to the basic knowledge offered by a database (e.g., the distance to a hotel is 5 km), the link between this crisp value and the fuzzy concepts that we use in real life (e.g., a close hotel). We present FleSe, a framework for searching databases in a flexible way based on the fuzzy concepts that we can define. In this paper we describe the simple procedure that lets us define fuzzy concepts and link them to crisp database fields.
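As a minimal sketch of the kind of link described here (the membership function, its thresholds and the example data below are our own illustrative assumptions, not FleSe syntax), a crisp distance column can be connected to the fuzzy concept "close" by a simple decreasing membership function and then used for ranking:

    # Hypothetical sketch: degree of truth of "close" for a crisp distance stored in a database column.
    def close(distance_km, fully_close=1.0, not_close=10.0):
        """1.0 up to `fully_close` km, 0.0 beyond `not_close` km, linear in between."""
        if distance_km <= fully_close: return 1.0
        if distance_km >= not_close:   return 0.0
        return (not_close - distance_km) / (not_close - fully_close)

    hotels = {"Hotel A": 0.8, "Hotel B": 4.5, "Hotel C": 12.0}   # crisp field: distance in km
    ranked = sorted(hotels, key=lambda h: close(hotels[h]), reverse=True)
    print(ranked)   # ['Hotel A', 'Hotel B', 'Hotel C']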
Keywords: Databases, Fuzzy Logic, Search Engine
Agent-oriented programming languages like Jason (and its interpreter) facilitate programming multiagent systems because they provide the necessary infrastructure. However, several of these agent-oriented languages are not entirely self-contained. Programmers may have to write code in the low-level language used to implement the agent-oriented programming platform in order to obtain some desired agent behaviour. For example, Jason programmers write Java code to modify the treatment of unexpected goals in the Jason reasoning cycle, or to fix the order in which intentions are executed. In this paper we discuss the problems that such a dependence on the underlying implementation language presents for an agent language and, for the case of Jason, we propose a solution based on extending the language with new syntactic constructs for more precise control of the execution flow of Jason agents. These new control mechanisms have been implemented in our Jason interpreter eJason.
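To illustrate the kind of control the new constructs aim to give at the language level (this is a deliberately generic Python sketch of a reasoning cycle with a pluggable intention-selection policy; it is neither Jason nor eJason syntax):

    # Hypothetical sketch: a reasoning cycle whose intention-selection policy is a parameter.
    from collections import deque

    def round_robin(intentions):
        return intentions.popleft()            # default: fair, FIFO-like selection

    def priority_first(intentions):
        best = max(intentions, key=lambda i: i["priority"])
        intentions.remove(best)                # alternative: always run the most urgent intention
        return best

    def reasoning_cycle(intentions, select=round_robin):
        intentions = deque(intentions)
        while intentions:
            intention = select(intentions)     # the execution order is controlled by data, not by Java code
            print("executing", intention["goal"])

    reasoning_cycle([{"goal": "clean", "priority": 1}, {"goal": "charge", "priority": 5}],
                    select=priority_first)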
The Internet has become a place where massive amounts of information and data are generated every day. This information is most often stored in an unstructured way, and even when it is structured in databases it cannot be retrieved through simple fuzzy queries. If the database stores the names of restaurants and their distances to the city center, by simple fuzzy queries we mean queries like "I want a restaurant close to the center". Since the computer has no knowledge of the relation between a restaurant being close to the center and its distance to the center, it cannot answer this query by itself. Human intervention is needed to tell the computer which database column the data must be retrieved from (the one with the restaurant's distance to the center) and how this non-fuzzy information is fuzzified (by applying the "close" function to the retrieved value). Once this is done, the computer can produce an answer simply by ordering the database elements by this new computed attribute. This example is very simple, but others are not, such as "I want a restaurant close to the center, not very expensive, and whose food type is Mediterranean". Doing this by hand for every existing attribute does not scale. We present a web interface for posing fuzzy and flexible queries, and a search engine capable of answering them without human intervention, using only the knowledge modelled in the framework's syntax. We expect this work to contribute to the development of more human-oriented fuzzy search engines.
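A minimal sketch of how such a combined fuzzy query could be answered once the fuzzifications are known (the membership functions, thresholds and example data are illustrative assumptions, not the framework's actual syntax or semantics):

    # Hypothetical sketch: rank restaurants by the combined degrees of several fuzzy criteria.
    def close(km):            return max(0.0, min(1.0, (10.0 - km) / 9.0))       # 1 at <=1 km, 0 at >=10 km
    def cheap(price):         return max(0.0, min(1.0, (60.0 - price) / 40.0))   # 1 at <=20, 0 at >=60
    def is_mediterranean(t):  return 1.0 if t == "mediterranean" else 0.0        # crisp criterion

    restaurants = [
        {"name": "Trattoria", "km": 0.5, "price": 35, "type": "mediterranean"},
        {"name": "Bistro",    "km": 3.0, "price": 55, "type": "french"},
        {"name": "Taverna",   "km": 8.0, "price": 18, "type": "mediterranean"},
    ]

    def satisfaction(r):
        # minimum (a t-norm) aggregates the three degrees; other aggregation operators are possible
        return min(close(r["km"]), cheap(r["price"]), is_mediterranean(r["type"]))

    for r in sorted(restaurants, key=satisfaction, reverse=True):
        print(r["name"], round(satisfaction(r), 2))   # Trattoria 0.62, Taverna 0.22, Bistro 0.0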
Keywords: Internet; database management systems; fuzzy set theory; query processing; search engines; Web interface; fuzzy queries; flexible searches; human-oriented fuzzy search engines; non-fuzzy information; real-world knowledge; fuzzy logic; semantics
This thesis studies full reduction in lambda calculi. In a nutshell, full reduction consists in evaluating the bodies of the functions of a functional programming language, that is, reducing under binders. The classical (i.e., pure untyped) lambda calculus is taken as the formal system that models the functional paradigm. Full reduction is a prominent technique when programs are treated as data objects, for instance when performing optimisations by partial evaluation, or when some attribute of a program is itself represented by a program, like the type in modern proof assistants. A notable feature of many full-reducing operational semantics is their hybrid nature, which is introduced in the thesis and constitutes its guiding theme. In the lambda calculus, the hybrid nature amounts to a 'phase distinction' in the treatment of abstractions when considered either from outside or from inside themselves. This distinction entails a layered structure in which a hybrid semantics depends on one or more subsidiary semantics. From a programming-languages standpoint, the thesis shows how to derive implementations of full-reducing operational semantics from their specifications by using program transformation techniques, i.e., syntactical transformations which preserve the semantic equivalence of programs. The existing program transformation techniques are adjusted to work with implementations of hybrid semantics. The thesis also shows how full reduction impacts implementations that use the environment technique, a key ingredient of real-world implementations of abstract machines which helps to circumvent the issue with binders. From a formal-systems standpoint, the thesis discloses a novel consistent theory for the call-by-value variant of the lambda calculus which accounts for full reduction. This theory entails a notion of observational equivalence which distinguishes more points than other existing theories for the call-by-value lambda calculus. This contribution helps to establish a 'standard theory' for that calculus, analogous to the 'standard theory' advocated by Barendregt for the classical lambda calculus. Some proof-theoretical results are presented, and insights into the model-theoretical study are given.
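As a concrete, minimal illustration of full reduction and of the hybrid, layered structure mentioned above (a generic textbook-style normaliser for the pure untyped lambda calculus, written as a Python sketch with de Bruijn indices; it is not one of the implementations derived in the thesis), note how the full-reducing semantics normalise relies on the subsidiary weak-head semantics whnf:

    # Terms in de Bruijn notation: ("var", n) | ("lam", body) | ("app", fun, arg).
    def shift(t, d, c=0):
        if t[0] == "var": return ("var", t[1] + d) if t[1] >= c else t
        if t[0] == "lam": return ("lam", shift(t[1], d, c + 1))
        return ("app", shift(t[1], d, c), shift(t[2], d, c))

    def subst(t, s, j=0):
        if t[0] == "var": return s if t[1] == j else t
        if t[0] == "lam": return ("lam", subst(t[1], shift(s, 1), j + 1))
        return ("app", subst(t[1], s, j), subst(t[2], s, j))

    def beta(body, arg):
        return shift(subst(body, shift(arg, 1)), -1)

    def whnf(t):
        # subsidiary semantics: weak head normalisation (call-by-name), never goes under a binder
        while t[0] == "app":
            f = whnf(t[1])
            if f[0] != "lam": return ("app", f, t[2])
            t = beta(f[1], t[2])
        return t

    def normalise(t):
        # hybrid semantics: weak-head-reduce first, then keep reducing everywhere, under binders too
        t = whnf(t)
        if t[0] == "lam": return ("lam", normalise(t[1]))
        if t[0] == "app": return ("app", normalise(t[1]), normalise(t[2]))
        return t

    # (\x. (\y. y) x) is already a weak normal form, but full reduction yields \x. x
    print(normalise(("lam", ("app", ("lam", ("var", 0)), ("var", 0)))))   # ('lam', ('var', 0))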
This article describes a systematic approach to testing behavioural aspects of Web Services that communicate using the JSON data format. As a key component, the Quviq QuickCheck property-based testing tool is used to automatically generate a large number of test cases from an abstract description of the service behaviour in the form of a finite state machine. The same behavioural description is also used to decide whether the execution of a test case is successful or not. To generate random JSON data for populating tests we have developed a new library, jsongen, which, given a characterisation of the JSON data as a JSON schema, (i) automatically derives a QuickCheck generator which can generate an infinite number of JSON values that validate against the schema, and (ii) provides a generic QuickCheck state machine which is capable of following the (hyper)links documented in the JSON schema to automatically explore the web service. The default behaviour of the state machine can easily be customised to include web-service-specific checks. The article illustrates the approach by developing a finite state machine model for testing a JSON-based web service.
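The flavour of schema-driven generation can be conveyed with a small stand-alone sketch (plain Python over a tiny subset of JSON Schema, for illustration only; jsongen itself is an Erlang library built on Quviq QuickCheck and also handles hyperlinks and stateful exploration):

    import random, string

    # Hypothetical sketch: derive a random-value generator from a small subset of JSON Schema.
    def generate(schema):
        t = schema.get("type")
        if t == "object":
            return {k: generate(s) for k, s in schema.get("properties", {}).items()}
        if t == "array":
            return [generate(schema["items"]) for _ in range(random.randint(0, 3))]
        if t == "string":
            return "".join(random.choices(string.ascii_lowercase, k=random.randint(1, 8)))
        if t == "integer":
            return random.randint(schema.get("minimum", 0), schema.get("maximum", 100))
        if t == "boolean":
            return random.choice([True, False])
        if "enum" in schema:
            return random.choice(schema["enum"])
        raise ValueError(f"unsupported schema: {schema}")

    user_schema = {"type": "object", "properties": {
        "name": {"type": "string"},
        "age":  {"type": "integer", "minimum": 0, "maximum": 120}}}
    print(generate(user_schema))   # e.g. {'name': 'qzkfw', 'age': 57}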
This article describes a systematic approach to testing behavioural aspects of Web Services that communicate using the JSON data format. As a key component, the Quviq QuickCheck property-based testing tool is used to automatically generate a large number of test cases from an abstract description of the service behaviour in the form of a finite state machine. The same behavioural description is also used to decide whether the execution of a test case is successful or not. To generate random JSON data for populating tests we have developed a new library, jsongen, which, given a characterisation of the JSON data as a JSON schema, automatically derives a QuickCheck generator capable of generating an infinite number of JSON values that validate against the schema. The article illustrates the approach by developing a finite state machine model for testing a stateful JSON-based web service.
Testing is a crucial aspect of the development of dependable embedded systems, and therefore a significant effort is put into researching and developing efficient testing techniques. However, at many universities testing is not taught in dedicated courses, but rather as an activity peripheral to programming. In this paper, we report on three separate experiences of teaching an advanced testing technique, property-based testing, and a supporting tool, Quviq QuickCheck, to both undergraduate and master's students.
The paper describes an approach to testing a class of safety-critical concurrent systems implemented using shared resources. Shared resources are characterised by a declarative specification from which an efficient implementation can be derived, and which also serves as the first approximation of the state-based test model used for testing an implementation of the resource.
In this article the methodology is illustrated by applying it to the task of testing the safety-critical software that controls an automated shipping plant, specified as a shared resource, which serves shipping orders using a set of autonomous robots. The operations of the robots are governed by a set of rules limiting the weight of the robots and their cargo, in order to ensure safe operation.
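A rough sketch of the underlying idea, using the declarative specification as a test oracle (the class names, the single weight rule and the trivial implementation below are invented for illustration and are not the shipping-plant resource of the paper):

    import random

    MAX_WEIGHT = 100                      # assumed safety rule: total cargo weight on a robot is bounded

    class RobotModel:                     # declarative model: abstract state, precondition and effect
        def __init__(self): self.load = 0
        def can_load(self, w):  return self.load + w <= MAX_WEIGHT
        def load_cargo(self, w): self.load += w

    class RobotImpl:                      # implementation under test (deliberately trivial here)
        def __init__(self): self._load = 0
        def load_cargo(self, w):
            self._load += w
            return self._load

    def run_random_test(steps=1000):
        model, impl = RobotModel(), RobotImpl()
        for _ in range(steps):
            w = random.randint(1, 40)
            if model.can_load(w):                          # only generate calls allowed by the model
                observed = impl.load_cargo(w)
                model.load_cargo(w)
                assert observed == model.load, "implementation disagrees with the model"
                assert model.load <= MAX_WEIGHT, "safety rule violated"

    run_random_test()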
Validation of a system design makes it possible to discover specification errors before the system is implemented (or tested), thus hopefully reducing development cost and time. The Unified Modelling Language (UML) is becoming widely accepted for the early specification and analysis of requirements for safety-critical systems, although a better balance is needed between UML's undisputed flexibility and a precise, unambiguous semantics. In this paper we introduce a tool that is capable of executing and formally verifying UML diagrams (namely, UML state machine, class and object diagrams) by means of a translation of their behavioural information into Erlang. The use of the tool is illustrated with an example from embedded software design.
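To illustrate the general idea of making the behavioural information of a UML state machine executable (the tool described here translates the diagrams into Erlang; the sketch below uses Python and an invented transition table purely as an illustration):

    # Hypothetical sketch: a UML-style state machine as an executable transition table.
    TRANSITIONS = {                        # (state, event) -> (guard, next_state)
        ("idle",    "start"): (lambda ctx: ctx["battery"] > 10, "running"),
        ("running", "stop"):  (lambda ctx: True,                "idle"),
        ("running", "fault"): (lambda ctx: True,                "error"),
    }

    def step(state, event, ctx):
        guard, target = TRANSITIONS.get((state, event), (None, None))
        if guard is None or not guard(ctx):
            return state                   # event not enabled: stay in the current state
        return target

    state, ctx = "idle", {"battery": 80}
    for event in ["start", "fault", "stop"]:
        state = step(state, event, ctx)
        print(event, "->", state)          # start -> running, fault -> error, stop -> error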
Special issue edited by Gibbons and Nogueira containing extended versions of selected papers from the 11th International Conference on Mathematics of Program Construction (MPC'12).
Siek and Garcia (2012) have explored the dynamic semantics of the gradually-typed lambda calculus by means of definitional interpreters and abstract machines. The correspondence between the calculus's mathematically described small-step reduction semantics and the implemented big-step definitional interpreters was left as a conjecture. We prove and generalise Siek and Garcia's conjectures using program transformation. We establish the correspondence between the definitional interpreters and the reduction semantics of a closure-converted gradually-typed lambda calculus that unifies and amends various versions of the calculus. We use a layered approach and two-level continuation-passing style so that the correspondence is parametric on the subsidiary coercion calculus. We have implemented the whole derivation for the eager error-detection policy and the downcast blame-tracking strategy. The correspondence can be established for other choices of error-detection policies and blame-tracking strategies, by plugging in the appropriate artefacts for the particular subsidiary coercion calculus.
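To give a concrete flavour of coercions, blame labels and the eager error-detection policy mentioned above (a stand-alone Python sketch with made-up constructors; it is not one of the artefacts derived in the paper):

    # Hypothetical sketch: a tiny coercion calculus with blame labels and eager error detection.
    class Blame(Exception): pass

    ID = ("id",)
    def tag(t):       return ("tag", t)            # inject a value of ground type t into Dyn
    def untag(t, l):  return ("untag", t, l)       # project Dyn back to ground type t, blaming l on failure
    def fail(l):      return ("fail", l)

    def compose(c, d):
        # eager policy: a tag followed by a mismatching untag is detected here, at composition time
        if c[0] == "id":  return d
        if d[0] == "id":  return c
        if c[0] == "tag" and d[0] == "untag":
            return ID if c[1] == d[1] else fail(d[2])
        return ("seq", c, d)                        # otherwise keep the pair and apply in order

    def apply_coercion(c, v):
        if c[0] == "id":    return v
        if c[0] == "tag":   return ("dyn", c[1], v)
        if c[0] == "untag":
            if isinstance(v, tuple) and v[0] == "dyn" and v[1] == c[1]: return v[2]
            raise Blame(c[2])
        if c[0] == "fail":  raise Blame(c[1])
        return apply_coercion(c[2], apply_coercion(c[1], v))   # "seq"

    print(apply_coercion(compose(tag("int"), untag("int", "l1")), 42))   # 42
    print(compose(tag("int"), untag("bool", "l2")))                      # ('fail', 'l2'): error found eagerly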