Abstracts
Model-based information flow analysis to support software certification
Leon Moonen (Simula Research Laboratory)
[slides]
The research presented in this talk is part of an ongoing industrial
collaboration with Kongsberg Maritime (KM), one of the largest suppliers
of maritime systems worldwide. The division that we work with
specializes in computerized systems for safety monitoring and automatic
corrective action in unacceptably hazardous situations. The overall
goal of the collaboration is to provide our partner with software
analysis tooling that delivers source-based evidence to support software
certification. In particular, we study a family of complex
safety-critical embedded software systems that connect software control
components to physical sensors and mechanical actuators.
A frequently advocated approach to manage the development of such
complex software systems is to compose them from reusable components,
instead of starting from scratch. Components may be implemented in
different programming languages and are tied together using
configuration files, or glue code, defining instantiation,
initialization and interconnections. Although correctly engineering the
composition and configuration of components is crucial for the overall
behavior, there is surprisingly little support for incorporating this
information in the static verification and validation of these systems.
In contrast, analyzing the properties of programs within closed code
boundaries has been studied for decades and is well established.
The presentation will discuss the techniques we developed to support
analysis across the components of a component-based system. We build
upon OMG's Knowledge Discovery Metamodel to reverse engineer
fine-grained homogeneous models for systems composed of heterogeneous
artifacts. Next, we track the information flow in these models using
slicing, and apply several transformations that enable us to visualize
the information flow at various levels of abstraction, trading off
scope against detail, aimed at serving both safety domain experts
and developers. Our techniques are implemented in a prototype
toolset that has been successfully used to answer software certification
questions of our industrial partner.
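To make the slicing step concrete, here is a minimal sketch in Python of
information-flow tracking as a forward slice over a dependence graph. It is
an illustration only, not the KDM-based toolset from the talk, and the node
names are invented.

    from collections import deque

    def forward_slice(deps, source):
        # Collect every node reachable from `source` along dependence
        # edges, i.e. everything the value at `source` can flow into.
        seen, work = {source}, deque([source])
        while work:
            node = work.popleft()
            for succ in deps.get(node, ()):
                if succ not in seen:
                    seen.add(succ)
                    work.append(succ)
        return seen

    # Toy dependence graph: a sensor reading flows through a filter into
    # an actuator command; the logger is independent of the sensor.
    deps = {
        "sensor.read": ["filter.in"],
        "filter.in": ["filter.out"],
        "filter.out": ["actuator.cmd"],
        "logger.msg": [],
    }
    print(forward_slice(deps, "sensor.read"))
    # -> {'sensor.read', 'filter.in', 'filter.out', 'actuator.cmd'}

Grouping the nodes of such slices by component gives the coarser,
abstraction-level views mentioned above.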
Verifying Programs by Evolving (Under)-Approximations
Arie Gurfinkel (Carnegie Mellon University)
[slides]
Program verification, i.e., synthesis of safe inductive invariants, is
a long-standing challenge in Computer Science. While the problem is
undecidable in general, there are many existing automated techniques
that perform well in practice. In this talk, I will describe our
recent work on verifying programs by a series of evolving
approximations. The basic idea is that instead of verifying a target
program P, we verify a "simpler-to-verify" approximation A of it, and
adapt A's safety proof, or its counterexample, to P. Unlike many
other existing approaches, our approximations are neither over- nor
under-approximations of the target program. I will present two
variants of this idea, for synthesizing invariants over linear
arithmetic and over the theory of bit-vectors, respectively. The talk
is based on joint work with Anvesh Komuravelli, Anton Belov, Sagar
Chaki, Joao Marques-Silva, and Edmund M. Clarke.
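Schematically, the loop described above might look as follows; this is one
reading of the abstract in Python, not the authors' algorithm, and all
callbacks (approximate, verify, proof_holds_for, cex_replays_on, refine)
are hypothetical placeholders.

    def verify_by_approximation(P, approximate, verify,
                                proof_holds_for, cex_replays_on, refine):
        # Verify a "simpler-to-verify" program A in place of P, and try to
        # adapt A's proof or counterexample back to P; otherwise evolve A.
        A = approximate(P)
        while True:
            verdict, artifact = verify(A)  # artifact: invariant or error trace
            if verdict == "safe" and proof_holds_for(artifact, P):
                return "safe", artifact    # A's invariant also covers P
            if verdict == "unsafe" and cex_replays_on(artifact, P):
                return "unsafe", artifact  # A's trace is feasible in P
            A = refine(A, artifact)        # A need be neither an over- nor
                                           # an under-approximation of P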
Program Repair without Regret
Barbara Jobstmann (Verimag)
[slides]
When fixing programs, we usually fix bugs one by one; at the same time,
we try to leave as many parts of the program unchanged as possible. In
this talk, I will present a new approach to repairing reactive programs
with respect to a specification, one that aims to support this incremental
method. The specification is given in linear temporal logic (LTL). As in
previous approaches, we aim for a repaired program that satisfies the
specification and is syntactically close to the faulty program. The
novelty of our approach is that it produces a program that is also
semantically close to the original program by enforcing that a subset of
the original traces is preserved. Intuitively, the faulty program is
considered to be a part of the specification, which enables us to
synthesize meaningful repairs, even for incomplete specifications. Our
approach is based on synthesizing a program with a set of behaviors that
stay within a lower and an upper bound. We provide an algorithm to
decide if a program is repairable with respect to our new notion, and
synthesize a repair if one exists. We analyze several ways to choose the
set of traces to leave intact and show the boundaries they impose on
repairability. We have evaluated the approach on several small examples.
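The bounds can be written compactly (my rendering of the abstract, where P
is the faulty program, P' the candidate repair, φ the LTL specification,
and T the set of original traces chosen to be preserved):

    \[
      T \;\subseteq\; \mathcal{L}(P') \;\subseteq\; \mathcal{L}(\varphi),
      \qquad T \subseteq \mathcal{L}(P).
    \]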
This is joint work with Christian von Essen.
Verification-Aided Regression Testing
Fabrizio Pastore (Università degli Studi di Milano-Bicocca)
[slides]
This talk presents Verification-Aided Regression Testing (VART), a novel
extension of regression testing that uses model checking to increase the
fault revealing capability of existing test suites. The key idea in
VART is to extend the use of test case executions from the conventional
direct fault discovery to the generation of behavioral properties
specific to the upgrade, by (i) automatically producing properties that
are proved to hold for the base version of a program, (ii) automatically
identifying and checking on the upgraded program only the properties
that, according to the developers' intention, must be preserved by the
upgrade, and (iii) reporting the faults and the corresponding
counter-examples that are not revealed by the regression tests.
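Schematically, the three steps might be wired together as follows; this is
a Python sketch of my reading of the abstract, and mine_properties,
must_be_preserved, and model_check are hypothetical stand-ins, not the
plug-in's real API.

    def vart(base_runs, upgraded_program, change_info, revealed_by_tests,
             mine_properties, must_be_preserved, model_check):
        # (i) properties proved to hold on the base version
        properties = mine_properties(base_runs)
        # (ii) keep only those the upgrade is intended to preserve
        relevant = [p for p in properties
                    if must_be_preserved(p, change_info)]
        # (iii) check the upgrade; report what the regression tests missed
        reports = []
        for p in relevant:
            holds, counterexample = model_check(upgraded_program, p)
            if not holds and p not in revealed_by_tests:
                reports.append((p, counterexample))
        return reports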
The talk will detail the empirical results obtained with both open
source and industrial software systems. These results show that VART
automatically produces properties that increase the effectiveness of
testing by automatically and promptly detecting faults that go unnoticed
by the existing regression test suites.
The talk will finally include a live demo of the Eclipse plug-in that
implements VART. The demo will show how software engineers can identify
regression faults with VART and fix these faults by inspecting the
counter-examples produced by the tool.
Applications of Symbolic Finite Automata
Margus Veanes (Microsoft Research)
[slides]
Symbolic automata theory lifts classical automata theory to rich
alphabet theories. It does so by replacing an explicit alphabet with an
alphabet described implicitly by a Boolean algebra. How does this
lifting affect the basic algorithms that lay the foundation for modern
automata theory and what is the incentive for doing this? We investigate
these questions here from the point of view of program trace analysis
and how such analysis can be used to perform evolution analysis of code.
In our approach we use state-of-the-art constraint solving techniques
for automata analysis that are both expressive and efficient and can
handle very large alphabets.
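As a minimal illustration of the lifting (a toy in Python, not Microsoft's
Automata library): each transition carries a predicate over the alphabet
rather than a concrete character, so a single edge can stand for an
arbitrarily large set of symbols.

    class SFA:
        # transitions: list of (state, predicate, next_state) triples,
        # where predicate is any Boolean function over characters.
        def __init__(self, initial, accepting, transitions):
            self.initial, self.accepting = initial, accepting
            self.transitions = transitions

        def accepts(self, word):
            state = self.initial
            for ch in word:
                for (src, pred, dst) in self.transitions:
                    if src == state and pred(ch):
                        state = dst
                        break
                else:
                    return False  # no enabled transition
            return state in self.accepting

    # One predicate edge covers all digits; no per-character transitions.
    digits = SFA(initial=0, accepting={1},
                 transitions=[(0, str.isdigit, 1), (1, str.isdigit, 1)])
    print(digits.accepts("2014"), digits.accepts("20a4"))  # True False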
Programming in Stages
K. Rustan M. Leino (Microsoft Research)
[slides]
A major issue facing software development and maintenance is the sheer
complexity of programs. Even software designs that start off simple
often evolve into programs that are both brittle and hard to understand.
When we explain a design or algorithm to a colleague, we do so by
gradually introducing more details in our descriptions. Wouldn't it be
nice if we could stage our programs in the same way?
In this talk, I consider the use of staging features in a programming
language, in particular as they are designed into Dafny, which is mostly
based on the old theory of superposition
refinement. The hope is that such features would let us program in
stages, but putting the nice concepts into a language has been harder
than expected. Perhaps we don't yet have the right feature set?
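Dafny's refinement syntax is not reproduced here, but the flavor of
superposition can be suggested by a rough Python analogy (an invented
example, not how Dafny works): an early stage fixes the outline of an
algorithm and leaves one step abstract, and a later stage superposes that
detail without rewriting the outline.

    class SearchStage1:
        # Stage 1: say *what* the search does; `pick` stays abstract.
        def find(self, items, wanted):
            candidates = list(items)
            while candidates:
                x = self.pick(candidates)  # detail deferred to a later stage
                if x == wanted:
                    return x
                candidates.remove(x)
            return None

    class SearchStage2(SearchStage1):
        # Stage 2: superpose the missing detail on top of stage 1.
        def pick(self, candidates):
            return candidates[0]  # refine: scan left to right

    print(SearchStage2().find([3, 1, 4], 4))  # -> 4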
Producing Effective Interpolants for SAT-based Incremental Verification and Upgrade Checking
Grigory Fedyukovich (University of Lugano)
[slides]
Propositional interpolation is widely used as a means of abstraction to
achieve efficient SAT-based symbolic model checking. In particular, we
use it as a central concept in two software bounded model checking
techniques: 1) incremental verification of given source code with
respect to various properties, and 2) verification of software upgrades
with respect to a fixed set of properties.
In this talk, we describe the PeRIPLO framework, an interpolating
SAT-solver that implements a set of techniques to generate and
manipulate interpolants for different model checking tasks. Our
systematic experimental investigation shows that size and logical
strength of interpolants significantly affect the performance of
verification, and that these characteristics depend on the role
interpolants play in the verification process.
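For reference, the standard notion underlying this work: given an
unsatisfiable conjunction A ∧ B, a Craig interpolant is a formula I over
the shared vocabulary such that

    \[
      A \implies I, \qquad I \land B \ \text{is unsatisfiable}, \qquad
      \mathit{vars}(I) \subseteq \mathit{vars}(A) \cap \mathit{vars}(B).
    \]

Many interpolants can be extracted from a single refutation proof; their
size and logical strength are precisely the degrees of freedom that the
experiments in the talk explore.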