...
 

@@ -42,3 +42,40 @@ Thumbs.db
*~
*.tmp.*
*.pyc
# FD #
######
builds/*
# Meddly #
##########
src/search/Meddly/src/.libs/
src/search/Meddly/src/lib/
src/search/Meddly/autom4te.cache/
src/search/Meddly/include/meddly.h
src/search/Meddly/include/meddly_expert.hh
src/search/Meddly/include/meddly_expert.h
src/search/Meddly/lib/
src/search/Meddly/include/meddly.hh
src/search/Meddly/examples/Makefile
src/search/Meddly/src/Makefile
src/search/Meddly/src/Makefile.in
src/search/Meddly/src/libmeddly.la
src/search/Meddly/Makefile
src/search/Meddly/Makefile.in
src/search/Meddly/aclocal.m4
src/search/Meddly/libtool
src/search/Meddly/config.status
src/search/Meddly/ar-lib
src/search/Meddly/config.guess
src/search/Meddly/config.sub
src/search/Meddly/configure
src/search/Meddly/depcomp
src/search/Meddly/ltmain.sh
src/search/Meddly/m4/libtool.m4
src/search/Meddly/src/libcudd.a-stamp/
src/search/Meddly/tmp/
src/search/Meddly/src/.deps/*
src/search/Meddly/src/forests/.deps/*
src/search/Meddly/src/operations/.deps/*
src/search/Meddly/src/storage/.deps/*
Description
===========
This file lists all people who actively contributed to Fast Downward,
i.e., all people who appear in commits in Fast Downward's history
(see below for an account of how Fast Downward emerged) or who
influenced the development of such commits. The list is currently
sorted by the last year in which the person was active and, in case of
ties, by the earliest year in which the person started contributing.
2003-2018 Malte Helmert
2010-2018 Jendrik Seipp
2010, 2011, 2013-2018 Silvan Sievers
2016-2018 Cedric Geissmann
2017-2018 Guillem Francès
2018 Patrick Ferber
2012-2017 Florian Pommerening
2015-2017 Manuel Heusner
2017 Daniel Killenberger
2008-2016 Gabriele Roeger
2016 Martin Wehrle, Yusra Alkhazraji
2013, 2015, 2016 Salome Eriksson
2014, 2015 Patrick von Reth
2015 Thomas Keller
2009-2014 Erez Karpas
2014 Robert P. Goldman
2010-2012 Andrew Coles
2010, 2012 Patrik Haslum
2003-2011 Silvia Richter
2009-2011 Emil Keyder
2010, 2011 Moritz Gronbach, Manuela Ortlieb
2011 Vidal Alcázar Saiz, Michael Katz, Raz Nissim
2010 Moritz Goebelbecker
2007-2009 Matthias Westphal
2009 Christian Muise
History
=======
The current version of Fast Downward is the merger of three different
projects:
* the original version of Fast Downward developed by Malte Helmert and
Silvia Richter
* LAMA, developed by Silvia Richter and Matthias Westphal based on the
original Fast Downward
* FD-Tech, a modified version of Fast Downward developed by Erez
Karpas and Michael Katz based on the original code
In addition to these three main sources, the codebase incorporates code
and features from numerous branches of the Fast Downward codebase
developed for various research papers. The main contributors to these
branches are Malte Helmert, Gabi Röger and Silvia Richter.
# Symple
Symple performs A* search based on Edge-Valued Multi-Valued
Decision Diagrams (EVMDDs). It supports zero, unit, constant and
state-dependent action costs.
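As a rough, illustrative sketch only (this is not Symple's implementation, which builds on MEDDLY's C++ EVMDD library; the variables, the cost function and all names below are made up), the idea behind an edge-valued diagram for a state-dependent action cost can be written in a few lines of Python: the diagram is a DAG whose edges carry additive weights, and evaluating the cost of a state means summing the weights along the path the state selects.

```python
# Illustrative sketch only: a tiny edge-valued decision diagram for the
# (made-up) state-dependent cost function cost(x, y) = 1 + x + 2*y.

class Node:
    """Inner diagram node: tests one variable; every outgoing edge carries a weight."""
    def __init__(self, var, children):
        self.var = var            # variable tested at this node
        self.children = children  # value -> (edge weight, successor node or None)

def evaluate(constant, root, state):
    """Sum the edge weights on the path selected by `state`, plus the dangling constant."""
    cost, node = constant, root
    while node is not None:
        weight, node = node.children[state[node.var]]
        cost += weight
    return cost

# cost(x, y) = 1 + x + 2*y: the dangling incoming edge carries the constant 1.
y_node = Node("y", {0: (0, None), 1: (2, None)})
x_node = Node("x", {0: (0, y_node), 1: (1, y_node)})

print(evaluate(1, x_node, {"x": 1, "y": 1}))  # 1 + 1 + 2 = 4
print(evaluate(1, x_node, {"x": 0, "y": 1}))  # 1 + 0 + 2 = 3
```

In an actual EVMDD, isomorphic sub-diagrams are shared, which is what keeps the representation compact for structured cost functions.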
Fast Downward is a domain-independent planning system.
For documentation and contact information see http://www.fast-downward.org/.
Symple is built on top of
[Fast Downward](http://www.fast-downward.org/),
[SymBa](https://fai.cs.uni-saarland.de/torralba/software.html)
and [MEDDLY-0.18](https://meddly.sourceforge.io/).
Therefore, it is made available under the GNU General Public License (GPL).
The following directories are not part of Fast Downward as covered by this
license:
## 1. Compiling Symple
The command
```sh
$ git clone https://gkigit.informatik.uni-freiburg.de/dspeck/symple.git <dirname>
```
will create a clone of the Symple master repository in the directory `<dirname>`.
To build Symple, run the following commands:
```sh
$ cd <dirname>
$ ./build
```
* ./src/search/ext
## 2. Running Symple
Symple can be run with several different configurations.
For the rest, the following license applies:
### 2.1 Default Configuration (IPC-18)
Starting Symple with its default configuration (bidirectional search, 100k node
limit per transition relation):
```sh
$ ./plan <domain.pddl> <problem.pddl> <plan-output-file>
```
Fast Downward is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation, either version 3 of the License, or (at your option) any later
version.
### 2.2 Default Configuration (SDAC)
Starting Symple with its default configuration (bidirectional search, 100k node
limit per transition relation):
```sh
$ ./plan-sdac <problem.sas> <plan-output-file>
```
Fast Downward is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU General Public License for more details.
### 2.3 Predefined Configurations
Starting Symple with predefined configurations:
```sh
$ ./src/plan-ipc <planner> <domain> <problem> <plan-output-file>
```
You should have received a copy of the GNU General Public License along with
this program. If not, see <http://www.gnu.org/licenses/>.
You can choose between the following predefined configurations for the "planner"
parameter, where the name indicates the search direction and the number the node
limit of the transition relations. For example, `sympleReg50000` runs regression
search with a node limit of 50000 per transition relation.
+ **Bidirectional Search**: symple, symple0, symple10000, symple25000, symple50000, symple100000, symple200000
+ **Progression**: symplePro, symplePro0, symplePro10000, symplePro25000, symplePro50000, symplePro100000, symplePro200000
+ **Regression**: sympleReg, sympleReg0, sympleReg10000, sympleReg25000, sympleReg50000, sympleReg100000, sympleReg200000
### 2.4 User-Defined Configurations
In general, it is possible to define new configurations for Symple. The easiest
way is to add a new predefined configuration to the [downward](src/search/downward) file.
For more information, please visit the [Fast Downward](http://www.fast-downward.org/)
website.
## 3. Benchmarks
+ [Here](https://bitbucket.org/planning-researchers/classical-domains), you can find a benchmark set consisting of former IPC domains.
+ [Here](https://gkigit.informatik.uni-freiburg.de/dspeck/SDAC-Benchmarks), you can find a benchmark set consisting of domains with state-dependent action costs. Note that each domain consists of a "trans_sas" and a "sas" folder. Symple is compatible with the files in the "sas" folder.
Bootstrap: docker
From: ubuntu:bionic
%setup
## The "%setup"-part of this script is called to bootstrap an empty
## container. It copies the source files from the branch of your
## repository where this file is located into the container to the
## directory "/planner". Do not change this part unless you know
## what you are doing and you are certain that you have to do so.
REPO_ROOT=`dirname $SINGULARITY_BUILDDEF`
cp -r $REPO_ROOT/ $SINGULARITY_ROOTFS/planner
%post
## The "%post"-part of this script is called after the container has
## been created with the "%setup"-part above and runs "inside the
## container". Most importantly, it is used to install dependencies
## and build the planner. Add all commands that have to be executed
## once before the planner runs in this part of the script.
## Install all necessary dependencies.
apt-get update
apt-get -y install cmake g++ make python automake autoconf libtool time gawk flex bison
gcc --version
## Build your planner
cd /planner
./build -j4
%runscript
## The runscript is called whenever the container is used to solve
## an instance.
DOMAINFILE=$1
PROBLEMFILE=$2
PLANFILE=$3
## Call your planner.
/planner/plan $DOMAINFILE $PROBLEMFILE $PLANFILE
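## For illustration only (file names are placeholders and the exact build
## command may differ between Singularity versions), the container could be
## built and invoked like this:
##   sudo singularity build symple.img Singularity
##   ./symple.img domain.pddl problem.pddl sas_plan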
## Update the following fields with meta data about your submission.
## Please use the same field names and use only one line for each value.
%labels
Name Symple-1
Description Symple performs a bidirectional A* search based on Edge-Valued Multi-Valued Decision Diagrams (EVMDDs).
Authors David Speck, Florian Geißer and Robert Mattmüller <speckd,geisserf,mattmuel>@informatik.uni-freiburg.de
SupportsDerivedPredicates no
SupportsQuantifiedPreconditions no
SupportsQuantifiedEffects no
#! /bin/sh
#source /opt/centos/devtoolset-1.1/i386/enable
#module load libs/gcc-4.7.2/i386
cd src
./build_all "$@"
cd ..
#!/usr/bin/env python
import errno
import glob
import multiprocessing
import os
import subprocess
import sys
CONFIGS = {}
script_dir = os.path.dirname(__file__)
for config_file in sorted(glob.glob(os.path.join(script_dir, "*build_configs.py"))):
    with open(config_file) as f:
        config_file_content = f.read()
        exec(config_file_content, globals(), CONFIGS)
DEFAULT_CONFIG_NAME = CONFIGS.pop("DEFAULT")
DEBUG_CONFIG_NAME = CONFIGS.pop("DEBUG")
CMAKE = "cmake"
DEFAULT_MAKE_PARAMETERS = []
if os.name == "posix":
MAKE = "make"
try:
num_cpus = multiprocessing.cpu_count()
except NotImplementedError:
pass
else:
DEFAULT_MAKE_PARAMETERS.append('-j{}'.format(num_cpus))
CMAKE_GENERATOR = "Unix Makefiles"
elif os.name == "nt":
MAKE = "nmake"
CMAKE_GENERATOR = "NMake Makefiles"
else:
print("Unsupported OS: " + os.name)
sys.exit(1)
def print_usage():
    script_name = os.path.basename(__file__)
    configs = []
    for name, args in sorted(CONFIGS.items()):
        if name == DEFAULT_CONFIG_NAME:
            name += " (default)"
        if name == DEBUG_CONFIG_NAME:
            name += " (default with --debug)"
        configs.append(name + "\n    " + " ".join(args))
    configs_string = "\n  ".join(configs)
    cmake_name = os.path.basename(CMAKE)
    make_name = os.path.basename(MAKE)
    generator_name = CMAKE_GENERATOR.lower()
    default_config_name = DEFAULT_CONFIG_NAME
    debug_config_name = DEBUG_CONFIG_NAME
    print("""Usage: {script_name} [BUILD [BUILD ...]] [--all] [--debug] [MAKE_OPTIONS]

Build one or more predefined build configurations of Fast Downward. Each build
uses {cmake_name} to generate {generator_name} and then uses {make_name} to compile the
code. Build configurations differ in the parameters they pass to {cmake_name}.
By default, the build uses N threads on a machine with N cores if the number of
cores can be determined. Use the "-j" option for {make_name} to override this default
behaviour.

Build configurations
  {configs_string}

--all      Alias to build all build configurations.
--debug    Alias to build the default debug build configuration.
--help     Print this message and exit.

Make options
  All other parameters are forwarded to {make_name}.

Example usage:
  ./{script_name}                     # build {default_config_name} in #cores threads
  ./{script_name} -j4                 # build {default_config_name} in 4 threads
  ./{script_name} debug32             # build debug32
  ./{script_name} --debug             # build {debug_config_name}
  ./{script_name} release64 debug64   # build both 64-bit build configs
  ./{script_name} --all VERBOSE=true  # build all build configs with detailed logs
""".format(**locals()))
def get_project_root_path():
    import __main__
    return os.path.dirname(__main__.__file__)
def get_builds_path():
    return os.path.join(get_project_root_path(), "builds")
def get_src_path():
    return os.path.join(get_project_root_path(), "src")
def get_build_path(config_name):
    return os.path.join(get_builds_path(), config_name)
def try_run(cmd, cwd):
    print('Executing command "{}" in directory "{}".'.format(" ".join(cmd), cwd))
    try:
        subprocess.check_call(cmd, cwd=cwd)
    except OSError as exc:
        if exc.errno == errno.ENOENT:
            print("Could not find '%s' on your PATH. For installation instructions, "
                  "see http://www.fast-downward.org/ObtainingAndRunningFastDownward." %
                  cmd[0])
            sys.exit(1)
        else:
            raise
def build(config_name, cmake_parameters, make_parameters):
print("Building configuration {config_name}.".format(**locals()))
build_path = get_build_path(config_name)
rel_src_path = os.path.relpath(get_src_path(), build_path)
try:
os.makedirs(build_path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(build_path):
pass
else:
raise
try_run([CMAKE, "-G", CMAKE_GENERATOR] + cmake_parameters + [rel_src_path],
cwd=build_path)
try_run([MAKE] + make_parameters, cwd=build_path)
print("Built configuration {config_name} successfully.".format(**locals()))
def main():
    config_names = set()
    make_parameters = DEFAULT_MAKE_PARAMETERS
    for arg in sys.argv[1:]:
        if arg == "--help" or arg == "-h":
            print_usage()
            sys.exit(0)
        elif arg == "--debug":
            config_names.add(DEBUG_CONFIG_NAME)
        elif arg == "--all":
            config_names |= set(CONFIGS.keys())
        elif arg in CONFIGS:
            config_names.add(arg)
        else:
            # Arguments that do not name a build configuration are forwarded to make.
            make_parameters.append(arg)
    if not config_names:
        config_names.add(DEFAULT_CONFIG_NAME)
    for config_name in config_names:
        build(config_name, CONFIGS[config_name], make_parameters)
if __name__ == "__main__":
    main()
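# Build configurations read by build.py above: each top-level variable in a
# "*build_configs.py" file defines one configuration name, and its value is the
# list of CMake parameters passed to that build.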
release32 = ["-DCMAKE_BUILD_TYPE=Release"]
debug32 = ["-DCMAKE_BUILD_TYPE=Debug"]
release32nolp = ["-DCMAKE_BUILD_TYPE=Release", "-DUSE_LP=NO"]
debug32nolp = ["-DCMAKE_BUILD_TYPE=Debug", "-DUSE_LP=NO"]
release64 = ["-DCMAKE_BUILD_TYPE=Release", "-DALLOW_64_BIT=True", "-DCMAKE_CXX_FLAGS='-m64'"]
debug64 = ["-DCMAKE_BUILD_TYPE=Debug", "-DALLOW_64_BIT=True", "-DCMAKE_CXX_FLAGS='-m64'"]
release64nolp = ["-DCMAKE_BUILD_TYPE=Release", "-DALLOW_64_BIT=True", "-DCMAKE_CXX_FLAGS='-m64'", "-DUSE_LP=NO"]
debug64nolp = ["-DCMAKE_BUILD_TYPE=Debug", "-DALLOW_64_BIT=True", "-DCMAKE_CXX_FLAGS='-m64'", "-DUSE_LP=NO"]
minimal = ["-DCMAKE_BUILD_TYPE=Release", "-DDISABLE_PLUGINS_BY_DEFAULT=YES"]
release32dynamic = ["-DCMAKE_BUILD_TYPE=Release", "-DFORCE_DYNAMIC_BUILD=YES"]
debug32dynamic = ["-DCMAKE_BUILD_TYPE=Debug", "-DFORCE_DYNAMIC_BUILD=YES"]
release64dynamic = ["-DCMAKE_BUILD_TYPE=Release", "-DALLOW_64_BIT=True", "-DCMAKE_CXX_FLAGS='-m64'", "-DFORCE_DYNAMIC_BUILD=YES"]
debug64dynamic = ["-DCMAKE_BUILD_TYPE=Debug", "-DALLOW_64_BIT=True", "-DCMAKE_CXX_FLAGS='-m64'", "-DFORCE_DYNAMIC_BUILD=YES"]
DEFAULT = "release32"
DEBUG = "debug32"
# -*- coding: utf-8 -*-
from __future__ import print_function
import os
from .util import DRIVER_DIR
PORTFOLIO_DIR = os.path.join(DRIVER_DIR, "portfolios")
ALIASES = {}
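# Each alias maps a name to the list of arguments that the driver forwards to
# the search component when that alias is selected.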
ALIASES["seq-sat-fd-autotune-1"] = [
"--heuristic", "hff=ff(transform=adapt_costs(one))",
"--heuristic", "hcea=cea()",
"--heuristic", "hcg=cg(transform=adapt_costs(plusone))",
"--heuristic", "hgc=goalcount()",
"--heuristic", "hAdd=add()",
"--search", """iterated([
lazy(alt([single(sum([g(),weight(hff,10)])),
single(sum([g(),weight(hff,10)]),pref_only=true)],
boost=2000),
preferred=[hff],reopen_closed=false,cost_type=one),
lazy(alt([single(sum([g(),weight(hAdd,7)])),
single(sum([g(),weight(hAdd,7)]),pref_only=true),
single(sum([g(),weight(hcg,7)])),
single(sum([g(),weight(hcg,7)]),pref_only=true),
single(sum([g(),weight(hcea,7)])),
single(sum([g(),weight(hcea,7)]),pref_only=true),
single(sum([g(),weight(hgc,7)])),
single(sum([g(),weight(hgc,7)]),pref_only=true)],
boost=1000),
preferred=[hcea,hgc],reopen_closed=false,cost_type=one),
lazy(alt([tiebreaking([sum([g(),weight(hAdd,3)]),hAdd]),
tiebreaking([sum([g(),weight(hAdd,3)]),hAdd],pref_only=true),
tiebreaking([sum([g(),weight(hcg,3)]),hcg]),
tiebreaking([sum([g(),weight(hcg,3)]),hcg],pref_only=true),
tiebreaking([sum([g(),weight(hcea,3)]),hcea]),
tiebreaking([sum([g(),weight(hcea,3)]),hcea],pref_only=true),
tiebreaking([sum([g(),weight(hgc,3)]),hgc]),
tiebreaking([sum([g(),weight(hgc,3)]),hgc],pref_only=true)],
boost=5000),
preferred=[hcea,hgc],reopen_closed=false,cost_type=normal),
eager(alt([tiebreaking([sum([g(),weight(hAdd,10)]),hAdd]),
tiebreaking([sum([g(),weight(hAdd,10)]),hAdd],pref_only=true),
tiebreaking([sum([g(),weight(hcg,10)]),hcg]),
tiebreaking([sum([g(),weight(hcg,10)]),hcg],pref_only=true),
tiebreaking([sum([g(),weight(hcea,10)]),hcea]),
tiebreaking([sum([g(),weight(hcea,10)]),hcea],pref_only=true),
tiebreaking([sum([g(),weight(hgc,10)]),hgc]),
tiebreaking([sum([g(),weight(hgc,10)]),hgc],pref_only=true)],
boost=500),
preferred=[hcea,hgc],reopen_closed=true,cost_type=normal)
],repeat_last=true,continue_on_fail=true)"""]
ALIASES["seq-sat-fd-autotune-2"] = [
"--heuristic", "hcea=cea(transform=adapt_costs(plusone))",
"--heuristic", "hcg=cg(transform=adapt_costs(one))",
"--heuristic", "hgc=goalcount(transform=adapt_costs(plusone))",
"--heuristic", "hff=ff()",
"--search", """iterated([
ehc(hcea,preferred=[hcea],preferred_usage=0,cost_type=normal),
lazy(alt([single(sum([weight(g(),2),weight(hff,3)])),
single(sum([weight(g(),2),weight(hff,3)]),pref_only=true),
single(sum([weight(g(),2),weight(hcg,3)])),
single(sum([weight(g(),2),weight(hcg,3)]),pref_only=true),
single(sum([weight(g(),2),weight(hcea,3)])),
single(sum([weight(g(),2),weight(hcea,3)]),pref_only=true),
single(sum([weight(g(),2),weight(hgc,3)])),
single(sum([weight(g(),2),weight(hgc,3)]),pref_only=true)],
boost=200),
preferred=[hcea,hgc],reopen_closed=false,cost_type=one),
lazy(alt([single(sum([g(),weight(hff,5)])),
single(sum([g(),weight(hff,5)]),pref_only=true),
single(sum([g(),weight(hcg,5)])),
single(sum([g(),weight(hcg,5)]),pref_only=true),
single(sum([g(),weight(hcea,5)])),
single(sum([g(),weight(hcea,5)]),pref_only=true),
single(sum([g(),weight(hgc,5)])),
single(sum([g(),weight(hgc,5)]),pref_only=true)],
boost=5000),
preferred=[hcea,hgc],reopen_closed=true,cost_type=normal),
lazy(alt([single(sum([g(),weight(hff,2)])),
single(sum([g(),weight(hff,2)]),pref_only=true),
single(sum([g(),weight(hcg,2)])),
single(sum([g(),weight(hcg,2)]),pref_only=true),
single(sum([g(),weight(hcea,2)])),
single(sum([g(),weight(hcea,2)]),pref_only=true),
single(sum([g(),weight(hgc,2)])),
single(sum([g(),weight(hgc,2)]),pref_only=true)],
boost=1000),
preferred=[hcea,hgc],reopen_closed=true,cost_type=one)
],repeat_last=true,continue_on_fail=true)"""]
ALIASES["seq-sat-lama-2011"] = [
"--if-unit-cost",
"--heuristic",
"hlm=lama_synergy(lm_rhw(reasonable_orders=true))",
"--heuristic", "hff=ff_synergy(hlm)",
"--search", """iterated([
lazy_greedy([hff,hlm],preferred=[hff,hlm]),
lazy_wastar([hff,hlm],preferred=[hff,hlm],w=5),
lazy_wastar([hff,hlm],preferred=[hff,hlm],w=3),
lazy_wastar([hff,hlm],preferred=[hff,hlm],w=2),
lazy_wastar([hff,hlm],preferred=[hff,hlm],w=1)
],repeat_last=true,continue_on_fail=true)""",
"--if-non-unit-cost",
"--heuristic",
"hlm1=lama_synergy(lm_rhw(reasonable_orders=true,"
" lm_cost_type=one),transform=adapt_costs(one))",
"--heuristic", "hff1=ff_synergy(hlm1)",
"--heuristic",
"hlm2=lama_synergy(lm_rhw(reasonable_orders=true,"
" lm_cost_type=plusone),transform=adapt_costs(plusone))",
"--heuristic", "hff2=ff_synergy(hlm2)",
"--search", """iterated([
lazy_greedy([hff1,hlm1],preferred=[hff1,hlm1],
cost_type=one,reopen_closed=false),
lazy_greedy([hff2,hlm2],preferred=[hff2,hlm2],
reopen_closed=false),
lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=5),
lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=3),
lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=2),
lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=1)