Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
Published: https://doi.org/10.1098/rsta.2015.0408

    Modern society is faced with a fundamental problem: the reliability of the complex, evolving software systems on which it critically depends cannot be guaranteed by established, non-mathematical computer engineering techniques such as informal prose specification and ad hoc testing. The situation is worsening: modern companies move fast, leaving little time for code analysis and testing; the behaviour of concurrent and distributed programs cannot be adequately assessed using traditional testing methods; users of mobile applications often neglect to apply software fixes; and malicious users increasingly exploit even simple programming errors, causing major security disruptions. Building trustworthy, reliable software is becoming ever harder to achieve, while new business and cybersecurity challenges make it ever more critical.

    The challenge is to bring program specification and verification to the heart of the software design process. Most code validation rests on the outdated assumptions that a company's internal, unpublished procedures are robust and that the developer is not malicious. High-grade industry players, such as aerospace companies, do use sophisticated internal processes and tools to establish some degree of trust in their code, which can then be certified by government bodies such as the UK National Cyber Security Centre. We should, however, be able to do better than this. Software should be judged on fundamental scientific principles, with proper answers to questions such as: ‘What does this software do and what does it not do?’; ‘Are the behaviours exhibited by the software the ones we want?’; and ‘How do we assess that the software does what it says it does?’. It should be possible to bring proper, mathematically rigorous, scientific method to our software development, in line with standard engineering practice.
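
    To make such questions concrete, a specification can be written as a machine-checkable contract alongside the code. The sketch below is an illustration of the general idea, not an artefact from the meeting: it uses ACSL, the contract language of the Frama-C verification framework, to specify a linear search. The contract states precisely what the function does and what it does not do, and a verification tool can then check the implementation against it.

        /* Illustrative sketch only: a linear search specified in ACSL. */

        /*@ requires n >= 0;
          @ requires \valid_read(a + (0 .. n-1));
          @ assigns \nothing;
          @ ensures -1 <= \result < n;
          @ ensures \result >= 0 ==> a[\result] == key;
          @ ensures \result == -1 ==>
          @           \forall integer i; 0 <= i < n ==> a[i] != key;
          @*/
        int find(const int *a, int n, int key) {
            int i;
            /*@ loop invariant 0 <= i <= n;
              @ loop invariant \forall integer j; 0 <= j < i ==> a[j] != key;
              @ loop assigns i;
              @ loop variant n - i;
              @*/
            for (i = 0; i < n; i++) {
                if (a[i] == key) {
                    return i;   /* the contract guarantees a[\result] == key */
                }
            }
            return -1;          /* key does not occur anywhere in a[0..n-1] */
        }

    The point is not the particular notation: the contract answers ‘what does this software do?’ (return an index of key, or -1), ‘what does it not do?’ (read memory, never write it), and a tool such as Frama-C's WP plug-in can check mechanically that the code meets it.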

    The specialist academic and industrial research community is ready to tackle this challenge: proof assistants are mature; symbolic testing and verification techniques and tools are tractable; and real-world programs are being specified and verified in academia and industry. On the one hand, academics and software companies, such as Facebook, Amazon and Google, are developing industrial-strength symbolic testing and verification tools that help the general programmer uncover simple bugs, such as memory overruns and null-pointer exceptions. Such tool development is, however, currently limited. Indeed, Facebook has the saying ‘this journey is 1% finished’ (Facebook Infer tool for C and Java, open-source announcement, 2015). On the other hand, academics and specialist software companies are building fully specified and verified software systems: Altran UK have delivered fully verified software for UK air-traffic control, with contractual guarantees to fix any bugs found; the $70M DARPA project ‘HACMS: High-Assurance Cyber Military Systems’ has developed fully verified system software stacks for autonomous vehicles, which passed strict security testing with flying colours; and the DeepSpec project, a $10M NSF Expeditions in Computing project, is developing the specification, testing and verification of the system software stack and creating an undergraduate curriculum to train future software engineers in specification and verification. These examples focus on closed software, working with modest-scale software systems, specialist programming languages, academic idealisations and techniques that require highly specialised skills. They cannot yet handle mainstream software developed by industrial programmers with widely varying levels of expertise.
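
    As a flavour of what such tools report, consider the C fragment below, an illustration written for this introduction rather than code from any of the projects above. The first function dereferences the result of malloc without checking it, a classic null-pointer defect of the kind that analysers such as Facebook Infer flag; the second shows the guarded version that removes the defect.

        #include <stdlib.h>

        /* Illustrative only: the kind of defect industrial static analysers report. */

        int *make_counter(void) {
            int *p = malloc(sizeof *p);
            *p = 0;                    /* BUG: p may be NULL if allocation fails */
            return p;
        }

        int *make_counter_checked(void) {
            int *p = malloc(sizeof *p);
            if (p == NULL) {           /* guarding the dereference fixes the defect */
                return NULL;
            }
            *p = 0;
            return p;
        }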

    The Royal Society scientific meeting on ‘Verified Trustworthy Software Systems’ was unique, bringing together verification experts from academia and industry, systems and security experts interested in verification, industrial scientists and engineers wanting to use verification, and employees of government bodies thinking about cybersecurity and the certification of software. The meeting spearheaded a new five-year Research Institute in Verified Trustworthy Software Systems, funded by the Engineering and Physical Sciences Research Council, with the remit to advocate for analysis and verification in the UK and internationally. The speakers were international experts in systems, security and verification, whose brief was to present new long-term verification challenges, describe real-world software verification projects, discuss new application areas, and assess how to trust verified software. The speakers, listed in order of their presentations at the meeting, were:

    • Sir Tony Hoare FRS from Microsoft Research (MSR), Cambridge, a founder of the field of program verification, who opened and closed the meeting;

    • J Strother Moore [1] from the University of Texas, who pioneered tractable verification in industry by developing ACL2 and using it to verify floating-point arithmetic in AMD processors;

    • Cédric Fournet from MSR Cambridge and the MSR-INRIA Joint Centre in Paris, who developed miTLS, the verified reference implementation of the TLS protocol;

    • Kathleen Fisher [2] from Tufts University, USA, who created the DARPA project HACMS;

    • Neil White [3], head of engineering at Altran UK;

    • Daniel Kroening [4] from Oxford, who built the CBMC model checker and is the CEO and founder of the start-up DiffBlue;

    • Gerwin Klein [5] from Data61 and the University of New South Wales in Australia, who, with his colleague Gernot Heiser, developed the verified kernel seL4, which underpins the HACMS project;

    • Nickolai Zeldovich from MIT, who, with his colleague Adam Chlipala [6] from the DeepSpec project, has developed a verified file system;

    • Mark Batty [7] from the University of Kent, who has developed weak memory models for x86, Power and ARM processors and, with Peter Sewell at Cambridge, has significantly influenced the C11 language specification;

    • Peter O'Hearn from Facebook London and University College London, who leads the team working on the Facebook Infer static analysis tool;

    • Philippa Gardner from Imperial College London, who works on the fundamental theory and prototype tools for the specification and verification of concurrent and web programs;

    • Fred Schneider from Cornell University, who is an expert in cybersecurity and previously worked in verification;

    • Marco Pistoia from IBM Research in New York, who combines static analysis and machine learning to provide information-flow security;

    • Michael Backes from Saarland University in Saarbrücken, who works on information security and privacy and is part of the ERC Synergy Grant imPACT;

    • Alexey Gotsman from the IMDEA Software Institute in Madrid, who works on the fundamental theory of distributed systems;

    • Xavier Leroy from INRIA in Paris, who developed the efficient verified compiler CompCert, which underpins the HACMS project; and

    • Greg Morrisett from Cornell University, who aims to build secure, reliable, high-performance software systems.

    In this publication, some of the speakers and their co-authors describe their current and future research ideas, aiming to make their papers accessible to a general audience interested in software verification. There are some absolute gems. Enjoy reading!

    Acknowledgements

    I would like to thank all the speakers and people in the audience for making the Royal Society scientific meeting on ‘Verified Trustworthy Software Systems’ such a fun meeting.

    Data accessibility

    This article has no additional data.

    Competing interests

    I declare I have no competing interests.

    Funding

    I received no funding for this study.

    Footnotes

    One contribution of 8 to a discussion meeting issue ‘Verified trustworthy software systems’.

    Published by the Royal Society. All rights reserved.

    References