Confluence Competition

CoCo 2013

Results

... are available here. See also the slides from the live competition.

Background

This site lists relevant information for the 2nd Confluence Competition (CoCo 2013). The competition will run live during the 2nd International Workshop on Confluence (IWC 2013), which is co-located with the 7th International Conference on Rewriting, Deduction, and Programming (RDP 2013) in Eindhoven, The Netherlands. In this competition, categories for

will be run. Other categories will be considered if there are tools and problems for them. In particular, we are seeking tools for first-order conditional rewrite systems. The problems for CoCo 2013 are selected from Cops, a collection of confluence problems. Problems submitted after the problem selection deadline will not be considered for the competition.

Timeline

requests for new categories: March 1, 2013
announcement of execution platform: April 1, 2013
tool registration: June 10, 2013
tool submission: June 20, 2013
problem selection: June 25, 2013
competition: June 28, 2013

Database

These problems will be considered for CoCo 2013.

Execution Platform

CoCo 2013 will run on StarExec, a high-end cross-community competition platform. For each problem, a tool has 60 seconds of access to a single node, consisting of two quad-core Intel(R) Xeon(R) E5-2609 processors running at a clock rate of 2.40 GHz and 128 GB of main memory. See cpuinfo and meminfo for details. The 64-bit operating system is Red Hat Enterprise Linux Workstation release 6.3 with kernel 2.6.32-358.6.2.el6.x86_64.

Registration/Submission

Registration is via the contact email address. Each registration should include a one-page system description highlighting the distinctive features of the tool. Tool submission is via StarExec.

Tools

We invite everyone to join CoCo. Tools must be able to read these problems as input. The output of a tool must contain an answer on the first line, followed by a proof argument understandable to human experts. Valid answers are YES (the input is confluent) and NO (the input is not confluent). Any other answer is interpreted to mean that the tool could not determine the status of the input.
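The answer convention above can be sketched as a small checker. This is not an official CoCo script; the function name and the UNKNOWN label for undetermined results are our own choices for illustration.

```python
# Hypothetical sketch of the CoCo answer convention: the first line of a
# tool's output must be exactly YES (confluent) or NO (not confluent);
# any other first line means the tool could not determine the status.

def interpret(output: str) -> str:
    """Return 'YES', 'NO', or 'UNKNOWN' for a tool's raw output."""
    lines = output.splitlines()
    answer = lines[0].strip() if lines else ""
    if answer in ("YES", "NO"):
        return answer
    return "UNKNOWN"  # tool could not determine the status of the input

print(interpret("YES\nproof: all critical pairs are joinable ..."))  # YES
print(interpret("Timeout"))                                          # UNKNOWN
```

Note that the comparison is exact: a lowercase "yes" or any extra text on the first line would count as an undetermined answer under this reading.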
For the certification category, all tools capable of producing CPF output are first run on Cops. The resulting proofs are then passed to the certifiers.

We encourage tools to be open source, and to permit archiving of the binaries of their competition versions on the competition website.

The host may reject tools that do not run on the designated hardware.

Scoring

Every tool receives one point for each answer. The tool with the highest score wins; in case of a draw there may be multiple winners. An answer is plausible if it has not been falsified (automatically or manually). A tool with at least one non-plausible answer cannot win.
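The scoring rule above can be illustrated with a short sketch. The tool names and answer counts below are made up for illustration, not actual CoCo data.

```python
# Hedged sketch of the scoring rule: one point per answer, tools with any
# falsified (non-plausible) answer are excluded, and a draw yields
# multiple winners. All data here is hypothetical.

results = {
    "ToolA": {"answers": 42, "falsified": 0},
    "ToolB": {"answers": 42, "falsified": 0},
    "ToolC": {"answers": 50, "falsified": 1},  # falsified answer: cannot win
}

# Only tools with no falsified answers are eligible to win.
eligible = {t: d["answers"] for t, d in results.items() if d["falsified"] == 0}
best = max(eligible.values())
winners = sorted(t for t, score in eligible.items() if score == best)
print(winners)  # ['ToolA', 'ToolB']: a draw produces multiple winners
```

Note that ToolC has the highest raw score but is excluded, since a single non-plausible answer disqualifies a tool from winning.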

Entrants

tool: ACP 0.40 (confluence tool)
authors: Takahito Aoto (Tohoku University), Yoshihito Toyama (Tohoku University)
description: pdf

tool: CeTA 2.10 (certification tool)
authors: Julian Nagele (University of Innsbruck), René Thiemann (University of Innsbruck)
description: pdf

tool: CSI 0.3 (confluence tool)
authors: Harald Zankl (University of Innsbruck), Bertram Felgenhauer (University of Innsbruck), Benjamin Höller (University of Innsbruck), Aart Middeldorp (University of Innsbruck)
description: pdf

tool: Saigawa 1.5 (confluence tool)
authors: Nao Hirokawa (JAIST), Dominik Klein
description: pdf

Organising Committee