Merged
Changes from 1 commit
127 commits
509d93c
benchmarks -> _benchmarks
ericsnowcurrently Jun 18, 2021
e16141d
Use a new API for pyperformance.benchmarks.
ericsnowcurrently Jun 18, 2021
d2101b2
Deal with benchmark objects instead of names.
ericsnowcurrently Jun 21, 2021
20b2c23
Refactor select_benchmarks().
ericsnowcurrently Jun 21, 2021
20e8492
Add and use the default manifest file.
ericsnowcurrently Jun 21, 2021
4f74cb8
Move run_perf_script() back.
ericsnowcurrently Jun 22, 2021
3917d22
Clean up benchmark/__init__.py.
ericsnowcurrently Jun 22, 2021
32e15ec
Make the utils a package.
ericsnowcurrently Jun 22, 2021
b041e2e
Move each of the default benchmarks into its own directory.
ericsnowcurrently Jun 23, 2021
d79eba5
Make BenchmarkSpec.metafile a "secondary" attribute.
ericsnowcurrently Jun 23, 2021
f30421d
Fix benchmark selection.
ericsnowcurrently Jun 23, 2021
043836b
Fix the default benchmarks selection.
ericsnowcurrently Jun 24, 2021
6ba9603
Fix the run script filename.
ericsnowcurrently Jun 24, 2021
0f15e8e
Run benchmarks from the metadata instead of hard-coded.
ericsnowcurrently Jun 24, 2021
91c27f9
Fix the requirements.
ericsnowcurrently Jun 24, 2021
a4f97ad
Pass "name" through to parse_pyproject_toml().
ericsnowcurrently Jun 25, 2021
6d385bf
Leave a note about classifiers.
ericsnowcurrently Jun 25, 2021
5308b29
Drop an unused file.
ericsnowcurrently Jun 26, 2021
4c8a18f
Load manifest and select benchmarks before running the command.
ericsnowcurrently Jun 28, 2021
0f63a54
Fix Benchmark.__repr__().
ericsnowcurrently Jun 28, 2021
c7a92f4
Fix a default arg in load_metadata().
ericsnowcurrently Jun 28, 2021
c45ece7
Fix the packaging data.
ericsnowcurrently Jun 29, 2021
e940e24
Support per-benchmark venvs in VirtualEnvironment.
ericsnowcurrently Jun 29, 2021
0420e96
Ignore pyproject.toml name only if provided.
ericsnowcurrently Jun 30, 2021
6bce386
Add requirements lock files to the benchmarks.
ericsnowcurrently Jun 30, 2021
29cf228
Make a venv for each benchmark instead of sharing one.
ericsnowcurrently Jun 30, 2021
6cadadc
Support "libsdir" in metadata.
ericsnowcurrently Jul 20, 2021
3fb5187
Fix an error message.
ericsnowcurrently Jul 20, 2021
abf56e9
Use the default resolve() if the default manifest is explicit.
ericsnowcurrently Jul 20, 2021
4bf223a
Merge in the version properly.
ericsnowcurrently Jul 20, 2021
9acbbe4
Preserve PYTHONPATH (with libsdir) when invoking pyperf via benchmark…
ericsnowcurrently Jul 20, 2021
d06aa7b
Add a note about using an upstream lib for parsing pyproject.toml.
ericsnowcurrently Jul 20, 2021
e48d29b
Use the full benchmark version rather than the canonicalized form.
ericsnowcurrently Jul 20, 2021
31f6021
Add iter_clean_lines() to _utils.
ericsnowcurrently Jul 22, 2021
4b13645
Use a run ID when running benchmarks.
ericsnowcurrently Jul 22, 2021
b045e9a
Finish implementing "pre" and "post" script support.
ericsnowcurrently Jul 22, 2021
54b8ba0
Drop "pre" and "post" script support. (It isn't necessary.)
ericsnowcurrently Jul 22, 2021
362bb6e
Use the correct venv name for each benchmark.
ericsnowcurrently Jul 22, 2021
e4fc5d4
Stop supporting "libsdir" in the benchmark metadata.
ericsnowcurrently Jul 22, 2021
9816103
Clean up run_perf_script() and related code.
ericsnowcurrently Jul 22, 2021
b7f90f3
Add a missing import.
ericsnowcurrently Sep 30, 2021
7d42a40
Drop accidental files.
ericsnowcurrently Oct 4, 2021
bb599e9
Install all dependencies when running tests.
ericsnowcurrently Oct 4, 2021
df30711
Temporarily get tests passing on Windows.
ericsnowcurrently Oct 4, 2021
8d563f9
Ignore stdlib_dir mismatch for now.
ericsnowcurrently Oct 4, 2021
d05b07c
Move requirements.txt into the data dir.
ericsnowcurrently Nov 2, 2021
701c910
Move the benchmarks to the data dir.
ericsnowcurrently Nov 2, 2021
afafda6
Move _pythoninfo out of the _utils dir.
ericsnowcurrently Nov 2, 2021
cf4679a
Move _pyproject_toml out of the _utils dir.
ericsnowcurrently Nov 2, 2021
d3bdaad
Make _utils a single module.
ericsnowcurrently Nov 2, 2021
8f0b3e3
Move benchmarks.* up to the top level.
ericsnowcurrently Nov 2, 2021
3ce65a6
Move benchmark.* up to the top level.
ericsnowcurrently Nov 2, 2021
3479e27
Clean up the metadata files.
ericsnowcurrently Nov 2, 2021
35e0c7a
Do not import _manifest in itself.
ericsnowcurrently Nov 2, 2021
0fe268a
Drop version and origin from the manifest.
ericsnowcurrently Nov 2, 2021
698dec9
Add a project-level symlink to the default benchmarks.
ericsnowcurrently Nov 2, 2021
f3c6c4b
Names starting with a digit.
ericsnowcurrently Nov 2, 2021
0987842
Drop the base metadata file.
ericsnowcurrently Nov 2, 2021
2c99d0f
Allow extra project fields even if there is a metabase.
ericsnowcurrently Nov 2, 2021
121a281
Fix a typo.
ericsnowcurrently Nov 2, 2021
0f9d202
Allow specifying the supported groups in the manifest.
ericsnowcurrently Nov 2, 2021
3d19edf
Add tags to all the benchmarks.
ericsnowcurrently Nov 2, 2021
7270c76
Use the tags to get the groups in the default manifest.
ericsnowcurrently Nov 3, 2021
efeb77e
Show the default group before the others.
ericsnowcurrently Nov 3, 2021
3b3de1d
Drop an outdated comment.
ericsnowcurrently Nov 3, 2021
8ceb990
Allow excluding benchmarks in a group.
ericsnowcurrently Nov 3, 2021
0f10859
Finish _init_metadata().
ericsnowcurrently Nov 3, 2021
9e3124d
metabase -> inherits
ericsnowcurrently Nov 3, 2021
62f7c23
Document the manifest and benchmark formats.
ericsnowcurrently Nov 3, 2021
05bb86f
Fix some typos.
ericsnowcurrently Nov 6, 2021
318720f
Print some diagnostic info on error.
ericsnowcurrently Nov 6, 2021
b51042b
Fall back to metadata for version.
ericsnowcurrently Nov 6, 2021
5eeae8e
Add benchmarks to the default group instead of names.
ericsnowcurrently Nov 6, 2021
0dc395a
Install the requirements, even if the venv already exists.
ericsnowcurrently Nov 6, 2021
6fb7403
Only re-install reqs for benchmark venvs.
ericsnowcurrently Nov 8, 2021
4979d5b
Support an "includes" section in the manifest.
ericsnowcurrently Nov 9, 2021
7f7b571
doc fixes
ericsnowcurrently Nov 9, 2021
0addec2
"all" and "default" are always valid groups.
ericsnowcurrently Nov 9, 2021
57e7070
Do not import pyperformance._manifest unless already installed.
ericsnowcurrently Nov 9, 2021
c55d690
Ensure we run in a venv when needed.
ericsnowcurrently Nov 9, 2021
01697d5
Do not re-install the shared venv.
ericsnowcurrently Nov 9, 2021
94ac28a
Only list the "all" and "default" groups once.
ericsnowcurrently Nov 9, 2021
f4b09bb
Add the manifest to the "compile" config.
ericsnowcurrently Nov 9, 2021
b56b25a
Adjust the stdlib_dir check.
ericsnowcurrently Nov 9, 2021
f8338ae
Be sure to set base_executable.
ericsnowcurrently Nov 9, 2021
dd4597b
Use the --venv opt to the "compile" command.
ericsnowcurrently Nov 15, 2021
e22eeb8
Separate the logic for create vs. recreate.
ericsnowcurrently Nov 15, 2021
6bb3a9e
Do not re-create the venv if already running in it.
ericsnowcurrently Nov 15, 2021
9c664ff
Ensure all requirements are always installed.
ericsnowcurrently Nov 15, 2021
91c09d3
Do not buffer stdout during tests.
ericsnowcurrently Nov 15, 2021
7a33956
Fix a check.
ericsnowcurrently Nov 15, 2021
62e2014
Do not buffer stdout during tests.
ericsnowcurrently Nov 15, 2021
d7ea256
Always switch to a venv if running out of the repo.
ericsnowcurrently Nov 15, 2021
8ed6fd5
Distinguish message from runtests.py.
ericsnowcurrently Nov 15, 2021
ce6d09e
Pass values into VirtualEnvironment instead of the options object.
ericsnowcurrently Nov 15, 2021
4048199
Do not buffer stdout during tests.
ericsnowcurrently Nov 15, 2021
df8cccf
Distinguish message from runtests.py.
ericsnowcurrently Nov 15, 2021
8719636
Print out the --venv option.
ericsnowcurrently Nov 15, 2021
3c5d7da
Print out the --venv option.
ericsnowcurrently Nov 15, 2021
14761b2
Print out the --venv option.
ericsnowcurrently Nov 15, 2021
b975c51
Do not add args directly to the "venv" command.
ericsnowcurrently Nov 16, 2021
062745b
Be explicit about "create".
ericsnowcurrently Nov 16, 2021
d918429
Drop debug messages.
ericsnowcurrently Nov 16, 2021
5842d48
Resolve the manifest file in the compile config.
ericsnowcurrently Nov 16, 2021
f1f9db1
Add a "dryrun" mode for testing "compile".
ericsnowcurrently Nov 16, 2021
35166b7
Add BenchmarkManifest.show().
ericsnowcurrently Nov 16, 2021
561d271
Use --manifest and --benchmarks when creating venv for "compile".
ericsnowcurrently Nov 16, 2021
6831508
Add the resolve_file() util.
ericsnowcurrently Nov 16, 2021
bb88341
Resolve the manifest file in includes.
ericsnowcurrently Nov 16, 2021
1e51f8e
Default BenchmarkRevision._dryrun to False.
ericsnowcurrently Nov 16, 2021
ef231fe
Set the default for --benchmarks manually.
ericsnowcurrently Nov 16, 2021
648bd18
Allow the "venv" command to not install benchmark requirements.
ericsnowcurrently Nov 16, 2021
7bb8d94
Use <NONE> as a marker for "no benchmarks".
ericsnowcurrently Nov 16, 2021
15fd559
Require --benchmarks (or default) for some commands.
ericsnowcurrently Nov 16, 2021
f041503
Do not always install the first benchmark venv.
ericsnowcurrently Nov 16, 2021
452b541
Separate creating venv from installing requirements.
ericsnowcurrently Nov 16, 2021
b049a4a
Only install per-benchmark requirements when running them.
ericsnowcurrently Nov 16, 2021
a083cd5
Print the benchmark number.
ericsnowcurrently Nov 16, 2021
7b18753
Factor out Python.resolve_program().
ericsnowcurrently Nov 17, 2021
c8a7789
Do not pass --benchmarks when creating venv for "compile".
ericsnowcurrently Nov 17, 2021
6f6df4d
Skip a benchmark if its requirements could not be installed.
ericsnowcurrently Nov 17, 2021
b379536
Set Python.program to None if the resolved path does not exist.
ericsnowcurrently Nov 17, 2021
686a96d
Factor out resolve_python().
ericsnowcurrently Nov 17, 2021
664a909
Add a blank line.
ericsnowcurrently Nov 17, 2021
9899813
Do not print a traceback for skipped benchmarks.
ericsnowcurrently Nov 17, 2021
da1b6c3
Fix a typo.
ericsnowcurrently Nov 17, 2021
09ffb6b
Merge main.
ericsnowcurrently Dec 7, 2021
Fix benchmark selection.
ericsnowcurrently committed Oct 4, 2021
commit f30421d9fced159db8499a26c8c1f51048f24d14
6 changes: 6 additions & 0 deletions pyperformance/_utils/__init__.py
@@ -4,6 +4,12 @@
from ._fs import (
temporary_file,
)
from ._misc import (
check_name,
parse_name_pattern,
parse_tag_pattern,
parse_selections,
)
from ._platform import (
MS_WINDOWS,
)
51 changes: 51 additions & 0 deletions pyperformance/_utils/_misc.py
@@ -0,0 +1,51 @@

def check_name(name, *, loose=False):
if not name or not isinstance(name, str):
raise ValueError(f'bad name {name!r}')
if not loose:
if name.startswith('-'):
raise ValueError(name)
if not name.replace('-', '_').isidentifier():
raise ValueError(name)


def parse_name_pattern(text, *, fail=True):
name = text
# XXX Support globs and/or regexes? (return a callable)
try:
check_name('_' + name)
except Exception:
if fail:
raise # re-raise
return None
return name


def parse_tag_pattern(text):
if not text.startswith('<'):
return None
if not text.endswith('>'):
return None
tag = text[1:-1]
# XXX Support globs and/or regexes? (return a callable)
check_name(tag)
return tag


def parse_selections(selections, parse_entry=None):
if isinstance(selections, str):
selections = selections.split(',')
if parse_entry is None:
parse_entry = (lambda o, e: (o, e, None, e))

for entry in selections:
entry = entry.strip()
if not entry:
continue

op = '+'
if entry.startswith('-'):
op = '-'
entry = entry[1:]

yield parse_entry(op, entry)
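The helpers added in `_utils/_misc.py` can be exercised standalone. This sketch copies the new functions from the diff and demonstrates how a selection string splits into include/exclude operations and how tag patterns are recognized (the sample inputs `nbody`, `json`, and `<apps>` are illustrative, not taken from the manifest):

```python
def check_name(name, *, loose=False):
    # Reject empty/non-string names; strict mode also rejects a leading
    # '-' and anything that is not an identifier once '-' maps to '_'.
    if not name or not isinstance(name, str):
        raise ValueError(f'bad name {name!r}')
    if not loose:
        if name.startswith('-'):
            raise ValueError(name)
        if not name.replace('-', '_').isidentifier():
            raise ValueError(name)


def parse_tag_pattern(text):
    # A tag pattern is a name wrapped in angle brackets, e.g. "<apps>".
    if not text.startswith('<') or not text.endswith('>'):
        return None
    tag = text[1:-1]
    check_name(tag)
    return tag


def parse_selections(selections, parse_entry=None):
    # Accept either a comma-separated string or an iterable of entries;
    # a leading '-' marks an entry for exclusion.
    if isinstance(selections, str):
        selections = selections.split(',')
    if parse_entry is None:
        parse_entry = (lambda op, entry: (op, entry, None, entry))

    for entry in selections:
        entry = entry.strip()
        if not entry:
            continue
        op = '+'
        if entry.startswith('-'):
            op = '-'
            entry = entry[1:]
        yield parse_entry(op, entry)


# "-json" deselects; plain names select:
print(list(parse_selections('nbody, -json')))
# [('+', 'nbody', None, 'nbody'), ('-', 'json', None, 'json')]
```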
2 changes: 1 addition & 1 deletion pyperformance/benchmark/__init__.py
@@ -1,4 +1,4 @@

# aliases
from ._spec import BenchmarkSpec, parse_benchmark
from ._spec import BenchmarkSpec, parse_benchmark, check_name
from ._benchmark import Benchmark
16 changes: 11 additions & 5 deletions pyperformance/benchmark/_benchmark.py
@@ -1,14 +1,17 @@
from ._spec import parse_benchmark
from ._spec import BenchmarkSpec


class Benchmark:

def __init__(self, spec, run):
if isinstance(spec, str):
spec = parse_benchmark(spec)
def __init__(self, spec, metafile):
spec, _metafile = BenchmarkSpec.from_raw(spec)
if not metafile:
if not _metafile:
raise ValueError(f'missing metafile for {spec!r}')
metafile = _metafile

self.spec = spec
self.run = run
self.metafile = metafile

def __repr__(self):
return f'{type(self).__name__}(spec={self.spec}, run={self.run})'
@@ -32,3 +35,6 @@ def __gt__(self, other):
except AttributeError:
return NotImplemented
return self.spec > other_spec

def run(self, *args):
return self._func(*args)
26 changes: 18 additions & 8 deletions pyperformance/benchmark/_spec.py
@@ -1,14 +1,10 @@
from collections import namedtuple

from .. import _utils

class BenchmarkSpec(namedtuple('BenchmarkSpec', 'name version origin')):

metafile = None

def __new__(cls, name, version=None, origin=None, metafile=None):
self = super().__new__(cls, name, version, origin)
self.metafile = metafile
return self
def check_name(name):
_utils.check_name('_' + name)


def parse_benchmark(entry):
@@ -18,4 +14,18 @@ def parse_benchmark(entry):
metafile = None
if not f'_{name}'.isidentifier():
raise ValueError(f'unsupported benchmark name in {entry!r}')
return BenchmarkSpec(name, version, origin, metafile)
bench = BenchmarkSpec(name, version, origin)
return bench, metafile


class BenchmarkSpec(namedtuple('BenchmarkSpec', 'name version origin')):
__slots__ = ()

@classmethod
def from_raw(cls, raw):
if isinstance(raw, BenchmarkSpec):
return raw, None
elif isinstance(raw, str):
return parse_benchmark(raw)
else:
raise ValueError(f'unsupported raw spec {raw!r}')
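The new `BenchmarkSpec.from_raw()` normalizes either an existing spec or a raw string into a `(spec, metafile)` pair. A minimal standalone sketch, with a deliberately simplified `parse_benchmark` that only handles a bare name (the real parser in the diff also extracts version, origin, and a metafile from the entry):

```python
from collections import namedtuple


class BenchmarkSpec(namedtuple('BenchmarkSpec', 'name version origin')):
    __slots__ = ()

    @classmethod
    def from_raw(cls, raw):
        # An existing spec passes through with no metafile;
        # a string is parsed; anything else is rejected.
        if isinstance(raw, BenchmarkSpec):
            return raw, None
        elif isinstance(raw, str):
            return parse_benchmark(raw)
        else:
            raise ValueError(f'unsupported raw spec {raw!r}')


def parse_benchmark(entry):
    # Simplified: treat the whole entry as a bare name.  The real
    # parse_benchmark() also yields version, origin, and metafile.
    name, version, origin, metafile = entry, None, None, None
    if not f'_{name}'.isidentifier():
        raise ValueError(f'unsupported benchmark name in {entry!r}')
    return BenchmarkSpec(name, version, origin), metafile


spec, metafile = BenchmarkSpec.from_raw('nbody')
```

Separating the spec from the metafile is what lets `Benchmark.__init__` in this commit fall back to the parsed metafile only when the caller did not supply one.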
73 changes: 20 additions & 53 deletions pyperformance/benchmarks/__init__.py
@@ -4,8 +4,9 @@
from .. import _benchmarks, benchmark as _benchmark
from . import _manifest

# an alias (but also used here)
from ._parse import parse_benchmarks
# aliases
from ._manifest import expand_benchmark_groups
from ._selections import parse_selection, iter_selections


DEFAULTS_DIR = os.path.join(
@@ -19,14 +20,23 @@ def load_manifest(filename, *, resolve=None):
filename = DEFAULT_MANIFEST
if resolve is None:
def resolve(bench):
if not bench.version:
bench = bench._replace(version=__version__)
if not bench.origin:
bench = bench._replace(origin='<default>')
if isinstance(bench, _benchmark.Benchmark):
spec = bench.spec
else:
spec = bench
bench = _benchmark.Benchmark(spec, '<bogus>')
bench.metafile = None

if not spec.version:
spec = spec._replace(version=__version__)
if not spec.origin:
spec = spec._replace(origin='<default>')
bench.spec = spec

if not bench.metafile:
metafile = os.path.join(DEFAULTS_DIR,
f'bm_{bench.name}',
'METADATA')
'pyproject.toml')
#bench = bench._replace(metafile=metafile)
bench.metafile = metafile
return bench
@@ -37,53 +47,10 @@ def resolve(bench):
def iter_benchmarks(manifest):
# XXX Use the benchmark's "run" script.
funcs, _ = _benchmarks.get_benchmarks()
for spec in manifest.benchmarks:
func = funcs[spec.name]
yield _benchmark.Benchmark(spec, func)
for bench in manifest.benchmarks:
bench._func = funcs[bench.name]
yield bench


def get_benchmarks(manifest):
return list(iter_benchmarks(manifest))


def get_benchmark_groups(manifest):
return dict(manifest.groups)


def expand_benchmark_groups(parsed, groups):
if isinstance(parsed, str):
parsed = _benchmark.parse_benchmark(parsed)

if not groups:
yield parsed
elif parsed.name not in groups:
yield parsed
else:
benchmarks = groups[parsed.name]
for bench in benchmarks or ():
yield from expand_benchmark_groups(bench, groups)


def select_benchmarks(raw, manifest, *,
expand=None,
known=None,
):
if expand is None:
groups = get_benchmark_groups(manifest)
expand = lambda n: expand_benchmark_groups(n, groups)
if known is None:
known = get_benchmarks(manifest)
benchmarks = {b.spec: b for b in get_benchmarks(manifest)}

included, excluded = parse_benchmarks(raw, expand=expand, known=known)
if not included:
included = set(expand('default', 'add'))

selected = set()
for spec in included:
bench = benchmarks[spec]
selected.add(bench)
for spec in excluded:
bench = benchmarks[spec]
selected.remove(bench)
return selected
104 changes: 55 additions & 49 deletions pyperformance/benchmarks/_manifest.py
@@ -1,6 +1,6 @@
from collections import namedtuple

from .. import benchmark as _benchmark
from .. import benchmark as _benchmark, _utils


BENCH_COLUMNS = ('name', 'version', 'origin', 'metafile')
@@ -26,9 +26,33 @@ def parse_manifest(text, *, resolve=None):
elif section.startswith('group '):
_, _, group = section.partition(' ')
groups[group] = _parse_group(group, seclines, benchmarks)
_check_groups(groups)
# XXX Update tags for each benchmark with member groups.
return BenchmarksManifest(benchmarks, groups)


def expand_benchmark_groups(bench, groups):
if isinstance(bench, str):
spec, metafile = _benchmark.parse_benchmark(bench)
if metafile:
bench = _benchmark.Benchmark(spec, metafile)
else:
bench = spec
elif isinstance(bench, _benchmark.Benchmark):
spec = bench.spec
else:
spec = bench

if not groups:
yield bench
elif bench.name not in groups:
yield bench
else:
benchmarks = groups[bench.name]
for bench in benchmarks or ():
yield from expand_benchmark_groups(bench, groups)


def _iter_sections(lines):
lines = (line.split('#')[0].strip()
for line in lines)
@@ -63,66 +87,48 @@ def _parse_benchmarks(lines, resolve):
benchmarks = []
for line in lines:
try:
name, version, origin, metafile = line.split('\t')
name, version, origin, metafile = (None if l == '-' else l
for l in line.split('\t'))
except ValueError:
raise ValueError(f'bad benchmark line {line!r}')
if not version or version == '-':
version = None
if not origin or origin == '-':
origin = None
if not metafile or metafile == '-':
metafile = None
bench = _benchmark.BenchmarkSpec(name, version, origin, metafile)
spec = _benchmark.BenchmarkSpec(name or None,
version or None,
origin or None,
)
if metafile:
bench = _benchmark.Benchmark(spec, metafile)
else:
bench = spec
if resolve is not None:
bench = resolve(bench)
benchmarks.append(bench)
return benchmarks


def _parse_group(name, lines, benchmarks):
benchmarks = set(benchmarks)
byname = {b.name: b for b in benchmarks}
if name in byname:
raise ValueError(f'a group and a benchmark have the same name ({name})')

group = []
seen = set()
for line in lines:
bench = _benchmark.parse_benchmark(line)
if bench not in benchmarks:
try:
bench = byname[bench.name]
except KeyError:
raise ValueError(f'unknown benchmark {bench.name!r} ({name})')
group.append(bench)
benchname = line
_benchmark.check_name(benchname)
if benchname in seen:
continue
if benchname in byname:
group.append(byname[benchname])
else:
# It may be a group. We check later.
group.append(benchname)
return group


#def render_manifest(manifest):
# if isinstance(manifest, str):
# raise NotImplementedError
# manifest = manifest.splitlines()
# yield BENCH_HEADER
# for row in manifest:
# if isinstance(row, str):
# row = _parse_manifest_row(row)
# if isinstance(row, str):
# yield row
# continue
# line _render_manifest_row(row)
#
# raise NotImplementedError
#
#
#def parse_group_manifest(text):
# ...
#
#
#def render_group_manifest(group, benchmarks):
# # (manifest file, bm name)
# ...
#
#
#def parse_bench_from_manifest(line):
# raise NotImplementedError
#
#
#def render_bench_for_manifest(benchmark, columns):
# raise NotImplementedError
# name, origin, version, metafile = info
def _check_groups(groups):
for group, benchmarks in groups.items():
for bench in benchmarks:
if not isinstance(bench, str):
continue
elif bench not in groups:
raise ValueError(f'unknown benchmark {bench!r} (in group {group!r})')
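The manifest's benchmark table uses TAB-separated columns where a literal "-" means "unset", as shown in the `_parse_benchmarks` hunk above. A standalone sketch of just that row-parsing convention (`parse_bench_line` is a hypothetical name for illustration):

```python
def parse_bench_line(line):
    # Each manifest row is TAB-separated: name, version, origin, metafile.
    # A "-" placeholder maps to None, matching the diff's generator trick.
    try:
        name, version, origin, metafile = (None if col == '-' else col
                                           for col in line.split('\t'))
    except ValueError:
        # Wrong number of columns: the generator unpacking fails.
        raise ValueError(f'bad benchmark line {line!r}')
    return name, version, origin, metafile


row = parse_bench_line('nbody\t-\t-\tbm_nbody/pyproject.toml')
```

Rows with a metafile become full `Benchmark` objects; rows without one stay bare specs until the `resolve` callback fills in a default metafile path.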