refactored the testing system, fixed fput and removed arr_asm.sl

SPEC.md (14 lines changed)
@@ -12,14 +12,14 @@ This document reflects the implementation that ships in this repository today (`
- **Driver (`main.py`)** – Supports `python main.py source.sl -o a.out`, `--emit-asm`, `--run`, `--dbg`, `--repl`, `--temp-dir`, `--clean`, repeated `-I/--include` paths, and repeated `-l` linker flags (either `-lfoo` or `-l libc.so.6`). Unknown `-l` flags are collected and forwarded to the linker.
- **REPL** – `--repl` launches a stateful session with commands such as `:help`, `:reset`, `:load`, `:call <word>`, `:edit`, and `:show`. The REPL still emits/links entire programs for each run; it simply manages the session source for you.
- **Imports** – `import relative/or/absolute/path.sl` inserts the referenced file textually. Resolution order: (1) absolute path, (2) relative to the importing file, (3) each include path (defaults: project root and `./stdlib`). Each file is included at most once per compilation unit. Import lines leave blank placeholders so error spans stay meaningful.
-- **Workspace** – `stdlib/` holds library modules, `tests/` contains executable samples with `.expected` outputs, and top-level `.sl` files (e.g., `fn.sl`, `nob.sl`) exercise advanced features.
+- **Workspace** – `stdlib/` holds library modules, `tests/` contains executable samples with `.expected` outputs, `extra_tests/` houses standalone integration demos, and `libs/` collects opt-in extensions such as `libs/fn.sl` and `libs/nob.sl`.
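The documented import resolution order can be sketched as a small lookup helper (illustrative only; `resolve_import` is a hypothetical name, not an API from this repo):

```python
from pathlib import Path
from typing import Optional, Sequence

def resolve_import(raw: str, importer_dir: Path, include_paths: Sequence[Path]) -> Optional[Path]:
    """Mirror the documented order: absolute path first, then the
    importing file's directory, then each -I include path."""
    candidate = Path(raw)
    if candidate.is_absolute():
        return candidate if candidate.exists() else None
    for base in (importer_dir, *include_paths):
        hit = base / raw
        if hit.exists():
            return hit
    return None
```

Note that the compiler additionally includes each file at most once per compilation unit; this sketch covers only the path lookup.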
## 3. Lexical Structure

- **Reader** – Whitespace-delimited; `#` starts a line comment. String literals honor `\"`, `\\`, `\n`, `\r`, `\t`, and `\0`. Numbers default to signed 64-bit integers via `int(token, 0)` (so `0x`, `0o`, and `0b` prefixes all work). Tokens containing `.` or `e` parse as floats.
- **Identifiers** – `[A-Za-z_][A-Za-z0-9_]*`. Everything else is treated as punctuation or a literal.
- **String representation** – At runtime each literal pushes `(addr len)` with the length on top. The assembler stores literals in `section .data` with a trailing NUL byte for convenience.
- **Lists** – `[` begins a list literal, `]` ends it. The compiler captures the intervening stack segment into a freshly `mmap`'d buffer that stores the length followed by the qword items, drops the captured values, and pushes the buffer address. Users must `munmap` the buffer when done.
-- **Token customization** – Immediate words can call `add-token` or `add-token-chars` to teach the reader about new multi-character tokens. `fn.sl` uses this in combination with token hooks to recognize `foo(1, 2)` syntax.
+- **Token customization** – Immediate words can call `add-token` or `add-token-chars` to teach the reader about new multi-character tokens. `libs/fn.sl` uses this in combination with token hooks to recognize `foo(1, 2)` syntax.
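The reader's literal classification described above can be sketched in a few lines (`classify_token` is a hypothetical helper name, not the repo's actual function):

```python
def classify_token(token: str):
    """Try signed integer first via int(token, 0), which accepts decimal
    plus 0x/0o/0b prefixes; fall back to float when the token contains
    '.' or 'e'; otherwise treat it as an ordinary word."""
    try:
        return ("int", int(token, 0))
    except ValueError:
        pass
    if "." in token or "e" in token:
        try:
            return ("float", float(token))
        except ValueError:
            pass
    return ("word", token)
```

Words such as `end` contain an `e` but fail float parsing, so they still fall through to the word case.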
### Stack-effect comments

- **Location and prefix** – Public words in `stdlib/` (and most user code) should document their stack effect with a line comment directly above the definition: `#word_name …`.
@@ -53,13 +53,13 @@ This document reflects the implementation that ships in this repository today (`
- **Virtual machine** – Immediate words run inside `CompileTimeVM`, which keeps its own stacks and exposes helpers registered in `bootstrap_dictionary()`:
  - Lists/maps: `list-new`, `list-append`, `list-pop`, `list-pop-front`, `list-length`, `list-empty?`, `list-get`, `list-set`, `list-extend`, `list-last`, `map-new`, `map-set`, `map-get`, `map-has?`.
  - Strings/numbers: `string=`, `string-length`, `string-append`, `string>number`, `int>string`.
-  - Lexer utilities: `lexer-new`, `lexer-pop`, `lexer-peek`, `lexer-expect`, `lexer-collect-brace`, `lexer-push-back` (used by `fn.sl` to parse signatures and infix expressions).
+  - Lexer utilities: `lexer-new`, `lexer-pop`, `lexer-peek`, `lexer-expect`, `lexer-collect-brace`, `lexer-push-back` (used by `libs/fn.sl` to parse signatures and infix expressions).
  - Token management: `next-token`, `peek-token`, `inject-tokens`, `token-lexeme`, `token-from-lexeme`.
-  - Reader hooks: `set-token-hook` installs a word that receives each token (pushed as a `Token` object) and must leave a truthy handled flag; `clear-token-hook` disables it. `fn.sl`'s `extend-syntax` demonstrates rewriting `foo(1, 2)` into ordinary word calls.
+  - Reader hooks: `set-token-hook` installs a word that receives each token (pushed as a `Token` object) and must leave a truthy handled flag; `clear-token-hook` disables it. `libs/fn.sl`'s `extend-syntax` demonstrates rewriting `foo(1, 2)` into ordinary word calls.
  - Prelude/BSS control: `prelude-clear`, `prelude-append`, `prelude-set`, `bss-clear`, `bss-append`, `bss-set` let user code override the `_start` stub or `.bss` layout.
  - Definition helpers: `emit-definition` injects a `word ... end` definition on the fly (used by the struct macro). `parse-error` raises a custom diagnostic.
- **Text macros** – `macro` is an immediate word implemented in Python; it prevents nesting by tracking active recordings and registers expansion tokens with `$n` substitution.
-- **Python bridges** – `:py name { ... } ;` executes once during parsing. The body may define `macro(ctx: MacroContext)` (with helpers such as `next_token`, `emit_literal`, `inject_tokens`, `new_label`, and direct `parser` access) and/or `intrinsic(builder: FunctionEmitter)` to emit assembly directly. The `fn` DSL (`fn.sl`) and other syntax layers are ordinary `:py` blocks.
+- **Python bridges** – `:py name { ... } ;` executes once during parsing. The body may define `macro(ctx: MacroContext)` (with helpers such as `next_token`, `emit_literal`, `inject_tokens`, `new_label`, and direct `parser` access) and/or `intrinsic(builder: FunctionEmitter)` to emit assembly directly. The `fn` DSL (`libs/fn.sl`) and other syntax layers are ordinary `:py` blocks.
## 7. Foreign Code, Inline Assembly, and Syscalls

- **`:asm name { ... } ;`** – Defines a word entirely in NASM syntax. The body is copied verbatim into the output and terminated with `ret`. If `keystone-engine` is installed, `:asm` words also execute at compile time; the VM marshals `(addr len)` string pairs by scanning for `data_start`/`data_end` references.
@@ -81,8 +81,10 @@ This document reflects the implementation that ships in this repository today (`
- **`stdlib.sl`** – Convenience aggregator that imports `core`, `mem`, `io`, and `utils` so most programs can simply `import stdlib/stdlib.sl`.

## 9. Testing and Usage Patterns
-- **Automated coverage** – `tests/*.sl` exercise allocations, dynamic arrays, IO (file/stdin/stdout/stderr), struct accessors, inline words, label/goto, macros, syscall wrappers, fn-style syntax, return-stack locals (`tests/with_variables.sl`), and compile-time overrides. Each test has a `.test` driver command and a `.expected` file to verify output.
+- **Automated coverage** – `python test.py` compiles every `tests/*.sl`, runs the generated binary, and compares stdout against `<name>.expected`. Optional companions include `<name>.stdin` (piped to the process), `<name>.args` (extra CLI args parsed with `shlex`), `<name>.stderr` (expected stderr), and `<name>.meta.json` (per-test knobs such as `expected_exit`, `expect_compile_error`, or `env`). The `extra_tests/` folder ships with curated demos (`extra_tests/ct_test.sl`, `extra_tests/args.sl`, `extra_tests/c_extern.sl`, `extra_tests/fn_test.sl`, `extra_tests/nob_test.sl`) that run alongside the core suite; pass `--extra path/to/foo.sl` to cover more standalone files. Use `python test.py --list` to see descriptions and `python test.py --update foo` to bless outputs after intentional changes.
- **Common commands** –
  - `python test.py` (run the whole suite)
  - `python test.py hello --update` (re-bless a single test)
  - `python main.py tests/hello.sl -o build/hello && ./build/hello`
  - `python main.py program.sl --emit-asm --temp-dir build`
  - `python main.py --repl`
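Because `<name>.args` companions go through `shlex`, shell-style quoting carries into the program's argv; a quick illustration (the file contents here are hypothetical):

```python
import shlex

# contents of a hypothetical hello.args companion file
raw = "--name 'Ada Lovelace' -v"
print(shlex.split(raw))  # ['--name', 'Ada Lovelace', '-v']
```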

c_extern.sl (12 lines changed)
@@ -10,12 +10,16 @@ word main
# Test C-style extern with implicit ABI handling
-10 labs puti cr

-# Basic math
-1.5 2.5 f+ fputln # Outputs: 4.000000
+# Basic math (scaled to avoid libc printf dependency)
+1.5 2.5 f+ # 4.0
+1000000.0 f*
+float>int puti cr # Prints 4000000 (6 decimal places of 4.0)

# External math library (libm)
10.0 10.0 atan2 # Result is pi/4
-4.0 f* fputln # Outputs: 3.141593 (approx pi)
+4.0 f* # Approx pi
+1000000.0 f*
+float>int puti cr # Prints scaled pi value

# Test extern void
0 exit
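The scaling trick in the new version is plain arithmetic: multiplying by 1,000,000 and truncating to an integer reproduces the six decimal places that `printf("%f")` would have shown. A quick check in Python:

```python
import math

val = 1.5 + 2.5              # the f+ result, exactly 4.0
print(int(val * 1_000_000))  # 4000000, what float>int puti prints

quarter_pi = math.atan2(10.0, 10.0)  # atan2(y, x) with y == x gives pi/4
print(int(quarter_pi * 4.0 * 1_000_000))
```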

extra_tests/args.args (new file, 1 line)
@@ -0,0 +1 @@
foo bar
extra_tests/args.expected (new file, 3 lines)
@@ -0,0 +1,3 @@
./build/args
foo
bar
extra_tests/args.sl (new file, 10 lines)
@@ -0,0 +1,10 @@
import stdlib/stdlib.sl

word main
    0 argc for
        dup
        argv@ dup strlen puts
        1 +
    end
    0
end
extra_tests/c_extern.expected (new file, 3 lines)
@@ -0,0 +1,3 @@
10
4.000000
3.141593
extra_tests/c_extern.meta.json (new file, 4 lines)
@@ -0,0 +1,4 @@
{
  "description": "C-style extern demo against libc/libm",
  "libs": ["libc.so.6", "m"]
}
extra_tests/c_extern.sl (new file, 23 lines)
@@ -0,0 +1,23 @@
import stdlib.sl
import float.sl

# C-style externs (auto ABI handling)
extern long labs(long n)
extern void exit(int status)
extern double atan2(double y, double x)

word main
    # Test C-style extern with implicit ABI handling
    -10 labs puti cr

    1.5 2.5 f+ # 4.0
    fputln

    # External math library (libm)
    10.0 10.0 atan2 # Result is pi/4
    4.0 f* # Approx pi
    fputln

    # Test extern void
    0 exit
end
extra_tests/ct_test.compile.expected (new file, 2 lines)
@@ -0,0 +1,2 @@
hello world
[info] built /home/igor/programming/IgorCielniak/l2/build/ct_test
extra_tests/ct_test.expected (new file, 1 line)
@@ -0,0 +1 @@
hello world
extra_tests/ct_test.meta.json (new file, 4 lines)
@@ -0,0 +1,4 @@
{
  "description": "compile-time hello world demo",
  "requires": ["keystone"]
}
extra_tests/fn_test.expected (new file, 3 lines)
@@ -0,0 +1,3 @@
1
5
3
extra_tests/fn_test.meta.json (new file, 4 lines)
@@ -0,0 +1,4 @@
{
  "description": "fn DSL lowering smoke test",
  "requires": ["keystone"]
}
@@ -1,6 +1,6 @@
import stdlib/stdlib.sl
import stdlib/io.sl
-import fn.sl
+import libs/fn.sl

fn foo(int a, int b){
    1
extra_tests/nob_test.meta.json (new file, 4 lines)
@@ -0,0 +1,4 @@
{
  "description": "shell wrapper demo; compile-only to avoid nondeterministic ls output",
  "compile_only": true
}
@@ -1,4 +1,4 @@
-import nob.sl
+import libs/nob.sl

word main
    "ls" sh
arr_asm.sl (removed, 248 lines)
@@ -1,248 +0,0 @@
# Dynamic arrays (qword elements)
#
# Layout at address `arr`:
#   [arr + 0]   len  (qword)
#   [arr + 8]   cap  (qword)
#   [arr + 16]  data (qword) = arr + 24
#   [arr + 24]  elements (cap * 8 bytes)
#
# Allocation: mmap; free: munmap.
# Growth: allocate new block, copy elements, munmap old block.

#arr_new [* | cap] -> [* | arr]
:asm arr_new {
    mov r14, [r12]        ; requested cap
    cmp r14, 1
    jge .cap_ok
    mov r14, 1
.cap_ok:
    ; bytes = 24 + cap*8
    mov rsi, r14
    shl rsi, 3
    add rsi, 24

    ; mmap(NULL, bytes, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON, -1, 0)
    xor rdi, rdi
    mov rdx, 3
    mov r10, 34
    mov r8, -1
    xor r9, r9
    mov rax, 9
    syscall

    ; header
    mov qword [rax], 0
    mov [rax + 8], r14
    lea rbx, [rax + 24]
    mov [rax + 16], rbx

    ; replace cap with arr pointer
    mov [r12], rax
    ret
}
;

#arr_len [* | arr] -> [* | len]
:asm arr_len {
    mov rax, [r12]
    mov rax, [rax]
    mov [r12], rax
    ret
}
;

#arr_cap [* | arr] -> [* | cap]
:asm arr_cap {
    mov rax, [r12]
    mov rax, [rax + 8]
    mov [r12], rax
    ret
}
;

#arr_data [* | arr] -> [* | ptr]
:asm arr_data {
    mov rax, [r12]
    mov rax, [rax + 16]
    mov [r12], rax
    ret
}
;

#arr_free [* | arr] -> [*]
:asm arr_free {
    mov rbx, [r12]        ; base
    mov rcx, [rbx + 8]    ; cap
    mov rsi, rcx
    shl rsi, 3
    add rsi, 24
    mov rdi, rbx
    mov rax, 11
    syscall
    add r12, 8            ; drop arr
    ret
}
;

#arr_reserve [*, cap | arr] -> [* | arr]
# Ensures capacity >= cap; returns (possibly moved) arr pointer.
:asm arr_reserve {
    mov rbx, [r12]        ; arr
    mov r14, [r12 + 8]    ; requested cap
    cmp r14, 1
    jge .req_ok
    mov r14, 1
.req_ok:
    mov rdx, [rbx + 8]    ; old cap
    cmp rdx, r14
    jae .no_change

    ; alloc new block: bytes = 24 + reqcap*8
    mov rsi, r14
    shl rsi, 3
    add rsi, 24
    xor rdi, rdi
    mov rdx, 3
    mov r10, 34
    mov r8, -1
    xor r9, r9
    mov rax, 9
    syscall

    mov r10, rax          ; new base
    lea r9, [r10 + 24]    ; new data

    ; header
    mov r8, [rbx]         ; len
    mov [r10], r8
    mov [r10 + 8], r14
    mov [r10 + 16], r9

    ; copy elements from old data
    mov r11, [rbx + 16]   ; old data
    xor rcx, rcx          ; i
.copy_loop:
    cmp rcx, r8
    je .copy_done
    mov rdx, [r11 + rcx*8]
    mov [r9 + rcx*8], rdx
    inc rcx
    jmp .copy_loop
.copy_done:

    ; munmap old block
    mov rsi, [rbx + 8]
    shl rsi, 3
    add rsi, 24
    mov rdi, rbx
    mov rax, 11
    syscall

    ; return new arr only
    mov [r12 + 8], r10
    add r12, 8
    ret

.no_change:
    ; drop cap, keep arr
    mov [r12 + 8], rbx
    add r12, 8
    ret
}
;

#arr_push [*, x | arr] -> [* | arr]
:asm arr_push {
    mov rbx, [r12]        ; arr
    mov rcx, [rbx]        ; len
    mov rdx, [rbx + 8]    ; cap
    cmp rcx, rdx
    jb .have_space

    ; grow: newcap = max(1, cap) * 2
    mov r14, rdx
    cmp r14, 1
    jae .cap_ok
    mov r14, 1
.cap_ok:
    shl r14, 1

    ; alloc new block
    mov rsi, r14
    shl rsi, 3
    add rsi, 24
    xor rdi, rdi
    mov rdx, 3
    mov r10, 34
    mov r8, -1
    xor r9, r9
    mov rax, 9
    syscall

    mov r10, rax          ; new base
    lea r9, [r10 + 24]    ; new data

    ; header
    mov rcx, [rbx]        ; len (reload; syscall clobbers rcx)
    mov [r10], rcx
    mov [r10 + 8], r14
    mov [r10 + 16], r9

    ; copy old data
    mov r11, [rbx + 16]   ; old data
    xor r8, r8
.push_copy_loop:
    cmp r8, rcx
    je .push_copy_done
    mov rdx, [r11 + r8*8]
    mov [r9 + r8*8], rdx
    inc r8
    jmp .push_copy_loop
.push_copy_done:

    ; munmap old block
    mov rsi, [rbx + 8]
    shl rsi, 3
    add rsi, 24
    mov rdi, rbx
    mov rax, 11
    syscall

    ; switch to new base
    mov rbx, r10

.have_space:
    ; store element at data[len]
    mov r9, [rbx + 16]
    mov rax, [r12 + 8]    ; x
    mov rcx, [rbx]        ; len
    mov [r9 + rcx*8], rax
    inc rcx
    mov [rbx], rcx

    ; return arr only
    mov [r12 + 8], rbx
    add r12, 8
    ret
}
;

#arr_pop [* | arr] -> [*, arr | x]
:asm arr_pop {
    mov rbx, [r12]        ; arr
    mov rcx, [rbx]        ; len
    test rcx, rcx
    jz .empty
    dec rcx
    mov [rbx], rcx
    mov rdx, [rbx + 16]   ; data
    mov rax, [rdx + rcx*8]
    jmp .push
.empty:
    xor rax, rax
.push:
    sub r12, 8
    mov [r12], rax
    ret
}
;
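For reference, the removed array's header layout is easy to model with Python's `struct` (a sketch assuming little-endian qwords as on x86-64; `pack_arr_header` and `arr_bytes` are hypothetical names, not part of the repo):

```python
import struct

def pack_arr_header(length: int, cap: int, base: int) -> bytes:
    # [base+0] len, [base+8] cap, [base+16] data pointer = base + 24
    return struct.pack("<3Q", length, cap, base + 24)

def arr_bytes(cap: int) -> int:
    # total mmap size: 24-byte header plus cap qword slots
    return 24 + cap * 8

hdr = pack_arr_header(0, 4, 0x1000)
print(len(hdr), arr_bytes(4))  # 24 56
```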

@@ -85,11 +85,14 @@

# Output
extern int printf(char* fmt, double x)
+extern int fflush(void* stream)

word fput
    "%f" drop swap printf drop
+    0 fflush drop
end

word fputln
    "%f\n" drop swap printf drop
+    0 fflush drop
end
597
test.py
Normal file
597
test.py
Normal file
@@ -0,0 +1,597 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Compiler-focused test runner for the L2 toolchain."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import difflib
|
||||
import fnmatch
|
||||
import importlib.util
|
||||
import json
|
||||
import os
|
||||
import platform
|
||||
import shlex
|
||||
import subprocess
|
||||
import sys
|
||||
import textwrap
|
||||
import time
|
||||
from dataclasses import dataclass, field
|
||||
from pathlib import Path
|
||||
from typing import Any, Dict, Iterable, List, Optional, Sequence, Set, Tuple
|
||||
|
||||
DEFAULT_EXTRA_TESTS = [
|
||||
"extra_tests/ct_test.sl",
|
||||
"extra_tests/args.sl",
|
||||
"extra_tests/c_extern.sl",
|
||||
"extra_tests/fn_test.sl",
|
||||
"extra_tests/nob_test.sl",
|
||||
]
|
||||
|
||||
COLORS = {
|
||||
"red": "\033[91m",
|
||||
"green": "\033[92m",
|
||||
"yellow": "\033[93m",
|
||||
"blue": "\033[94m",
|
||||
"reset": "\033[0m",
|
||||
}
|
||||
|
||||
|
||||
def colorize(text: str, color: str) -> str:
|
||||
return COLORS.get(color, "") + text + COLORS["reset"]
|
||||
|
||||
|
||||
def format_status(tag: str, color: str) -> str:
|
||||
return colorize(f"[{tag}]", color)
|
||||
|
||||
|
||||
def normalize_text(text: str) -> str:
|
||||
return text.replace("\r\n", "\n")
|
||||
|
||||
|
||||
def diff_text(expected: str, actual: str, label: str) -> str:
|
||||
expected_lines = expected.splitlines(keepends=True)
|
||||
actual_lines = actual.splitlines(keepends=True)
|
||||
return "".join(
|
||||
difflib.unified_diff(expected_lines, actual_lines, fromfile=f"{label} (expected)", tofile=f"{label} (actual)")
|
||||
)
|
||||
|
||||
|
||||
def resolve_path(root: Path, raw: str) -> Path:
|
||||
candidate = Path(raw)
|
||||
return candidate if candidate.is_absolute() else root / candidate
|
||||
|
||||
|
||||
def match_patterns(name: str, patterns: Sequence[str]) -> bool:
|
||||
if not patterns:
|
||||
return True
|
||||
for pattern in patterns:
|
||||
if fnmatch.fnmatch(name, pattern) or pattern in name:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def quote_cmd(cmd: Sequence[str]) -> str:
|
||||
return " ".join(shlex.quote(part) for part in cmd)
|
||||
|
||||
|
||||
def is_arm_host() -> bool:
|
||||
machine = platform.machine().lower()
|
||||
return machine.startswith("arm") or machine.startswith("aarch")
|
||||
|
||||
|
||||
def wrap_runtime_command(cmd: List[str]) -> List[str]:
|
||||
if not is_arm_host():
|
||||
return cmd
|
||||
if cmd and cmd[0].endswith("qemu-x86_64"):
|
||||
return cmd
|
||||
return ["qemu-x86_64", *cmd]
|
||||
|
||||
|
||||
def read_json(meta_path: Path) -> Dict[str, Any]:
|
||||
if not meta_path.exists():
|
||||
return {}
|
||||
raw = meta_path.read_text(encoding="utf-8").strip()
|
||||
if not raw:
|
||||
return {}
|
||||
try:
|
||||
data = json.loads(raw)
|
||||
except json.JSONDecodeError as exc:
|
||||
raise ValueError(f"invalid JSON in {meta_path}: {exc}") from exc
|
||||
if not isinstance(data, dict):
|
||||
raise ValueError(f"metadata in {meta_path} must be an object")
|
||||
return data
|
||||
|
||||
|
||||
def read_args_file(path: Path) -> List[str]:
|
||||
if not path.exists():
|
||||
return []
|
||||
text = path.read_text(encoding="utf-8").strip()
|
||||
if not text:
|
||||
return []
|
||||
return shlex.split(text)
|
||||
|
||||
|
||||
def write_text(path: Path, content: str) -> None:
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
path.write_text(content, encoding="utf-8")
|
||||
|
||||
|
||||
@dataclass
|
||||
class TestCaseConfig:
|
||||
description: Optional[str] = None
|
||||
compile_only: bool = False
|
||||
expect_compile_error: bool = False
|
||||
expected_exit: int = 0
|
||||
skip: bool = False
|
||||
skip_reason: Optional[str] = None
|
||||
env: Dict[str, str] = field(default_factory=dict)
|
||||
args: Optional[List[str]] = None
|
||||
stdin: Optional[str] = None
|
||||
binary: Optional[str] = None
|
||||
tags: List[str] = field(default_factory=list)
|
||||
requires: List[str] = field(default_factory=list)
|
||||
libs: List[str] = field(default_factory=list)
|
||||
|
||||
@classmethod
|
||||
def from_meta(cls, data: Dict[str, Any]) -> "TestCaseConfig":
|
||||
cfg = cls()
|
||||
if not data:
|
||||
return cfg
|
||||
if "description" in data:
|
||||
if not isinstance(data["description"], str):
|
||||
raise ValueError("description must be a string")
|
||||
cfg.description = data["description"].strip() or None
|
||||
if "compile_only" in data:
|
||||
cfg.compile_only = bool(data["compile_only"])
|
||||
if "expect_compile_error" in data:
|
||||
cfg.expect_compile_error = bool(data["expect_compile_error"])
|
||||
if "expected_exit" in data:
|
||||
cfg.expected_exit = int(data["expected_exit"])
|
||||
if "skip" in data:
|
||||
cfg.skip = bool(data["skip"])
|
||||
if "skip_reason" in data:
|
||||
if not isinstance(data["skip_reason"], str):
|
||||
raise ValueError("skip_reason must be a string")
|
||||
cfg.skip_reason = data["skip_reason"].strip() or None
|
||||
if "env" in data:
|
||||
env = data["env"]
|
||||
if not isinstance(env, dict):
|
||||
raise ValueError("env must be an object of key/value pairs")
|
||||
cfg.env = {str(k): str(v) for k, v in env.items()}
|
||||
if "args" in data:
|
||||
args_val = data["args"]
|
||||
if not isinstance(args_val, list) or not all(isinstance(item, str) for item in args_val):
|
||||
raise ValueError("args must be a list of strings")
|
||||
cfg.args = list(args_val)
|
||||
if "stdin" in data:
|
||||
if not isinstance(data["stdin"], str):
|
||||
raise ValueError("stdin must be a string")
|
||||
cfg.stdin = data["stdin"]
|
||||
if "binary" in data:
|
||||
if not isinstance(data["binary"], str):
|
||||
raise ValueError("binary must be a string")
|
||||
cfg.binary = data["binary"].strip() or None
|
||||
if "tags" in data:
|
||||
tags = data["tags"]
|
||||
if not isinstance(tags, list) or not all(isinstance(item, str) for item in tags):
|
||||
raise ValueError("tags must be a list of strings")
|
||||
cfg.tags = list(tags)
|
||||
if "requires" in data:
|
||||
requires = data["requires"]
|
||||
if not isinstance(requires, list) or not all(isinstance(item, str) for item in requires):
|
||||
raise ValueError("requires must be a list of module names")
|
||||
cfg.requires = [item.strip() for item in requires if item.strip()]
|
||||
if "libs" in data:
|
||||
libs = data["libs"]
|
||||
if not isinstance(libs, list) or not all(isinstance(item, str) for item in libs):
|
||||
raise ValueError("libs must be a list of strings")
|
||||
cfg.libs = [item.strip() for item in libs if item.strip()]
|
||||
return cfg
|
||||
|
||||
|
||||
@dataclass
|
||||
class TestCase:
|
||||
name: str
|
||||
source: Path
|
||||
binary_stub: str
|
||||
expected_stdout: Path
|
||||
expected_stderr: Path
|
||||
compile_expected: Path
|
||||
stdin_path: Path
|
||||
args_path: Path
|
||||
meta_path: Path
|
||||
build_dir: Path
|
||||
config: TestCaseConfig
|
||||
|
||||
@property
|
||||
def binary_path(self) -> Path:
|
||||
binary_name = self.config.binary or self.binary_stub
|
||||
return self.build_dir / binary_name
|
||||
|
||||
def runtime_args(self) -> List[str]:
|
||||
if self.config.args is not None:
|
||||
return list(self.config.args)
|
||||
return read_args_file(self.args_path)
|
||||
|
||||
def stdin_data(self) -> Optional[str]:
|
||||
if self.config.stdin is not None:
|
||||
return self.config.stdin
|
||||
if self.stdin_path.exists():
|
||||
return self.stdin_path.read_text(encoding="utf-8")
|
||||
return None
|
||||
|
||||
def description(self) -> str:
|
||||
return self.config.description or ""
|
||||
|
||||
|
||||
@dataclass
|
||||
class CaseResult:
|
||||
case: TestCase
|
||||
status: str
|
||||
stage: str
|
||||
message: str
|
||||
details: Optional[str] = None
|
||||
duration: float = 0.0
|
||||
|
||||
@property
|
||||
def failed(self) -> bool:
|
||||
return self.status == "failed"
|
||||
|
||||
|
||||
class TestRunner:
|
||||
def __init__(self, root: Path, args: argparse.Namespace) -> None:
|
||||
self.root = root
|
||||
self.args = args
|
||||
self.tests_dir = resolve_path(root, args.tests_dir)
|
||||
self.build_dir = resolve_path(root, args.build_dir)
|
||||
self.build_dir.mkdir(parents=True, exist_ok=True)
|
||||
self.main_py = self.root / "main.py"
|
||||
self.base_env = os.environ.copy()
|
||||
self._module_cache: Dict[str, bool] = {}
|
||||
extra_entries = list(DEFAULT_EXTRA_TESTS)
|
||||
if args.extra:
|
||||
extra_entries.extend(args.extra)
|
||||
self.extra_sources = [resolve_path(self.root, entry) for entry in extra_entries]
|
||||
self.cases = self._discover_cases()
|
||||
|
||||
def _discover_cases(self) -> List[TestCase]:
|
||||
sources: List[Path] = []
|
||||
if self.tests_dir.exists():
|
||||
sources.extend(sorted(self.tests_dir.glob("*.sl")))
|
||||
for entry in self.extra_sources:
|
||||
if entry.is_dir():
|
||||
sources.extend(sorted(entry.glob("*.sl")))
|
||||
continue
|
||||
sources.append(entry)
|
||||
|
||||
cases: List[TestCase] = []
|
||||
seen: Set[Path] = set()
|
||||
for source in sources:
|
||||
try:
|
||||
resolved = source.resolve()
|
||||
except FileNotFoundError:
|
||||
continue
|
||||
if not resolved.exists() or resolved in seen:
|
||||
continue
|
||||
seen.add(resolved)
|
||||
case = self._case_from_source(resolved)
|
||||
cases.append(case)
|
||||
cases.sort(key=lambda case: case.name)
|
||||
return cases
|
||||
|
||||
def _case_from_source(self, source: Path) -> TestCase:
|
||||
meta_path = source.with_suffix(".meta.json")
|
||||
config = TestCaseConfig()
|
||||
if meta_path.exists():
|
||||
config = TestCaseConfig.from_meta(read_json(meta_path))
|
||||
try:
|
||||
relative = source.relative_to(self.root).as_posix()
|
||||
except ValueError:
|
||||
relative = source.as_posix()
|
||||
if relative.endswith(".sl"):
|
||||
relative = relative[:-3]
|
||||
return TestCase(
|
||||
name=relative,
|
||||
source=source,
|
||||
binary_stub=source.stem,
|
||||
expected_stdout=source.with_suffix(".expected"),
|
||||
expected_stderr=source.with_suffix(".stderr"),
|
||||
compile_expected=source.with_suffix(".compile.expected"),
|
||||
stdin_path=source.with_suffix(".stdin"),
|
||||
args_path=source.with_suffix(".args"),
|
||||
meta_path=meta_path,
|
||||
build_dir=self.build_dir,
|
||||
config=config,
|
||||
)
|
||||
|
||||
def run(self) -> int:
|
||||
if not self.tests_dir.exists():
|
||||
print("tests directory not found", file=sys.stderr)
|
||||
return 1
|
||||
if not self.main_py.exists():
|
||||
print("main.py missing; cannot compile tests", file=sys.stderr)
|
||||
return 1
|
||||
selected = [case for case in self.cases if match_patterns(case.name, self.args.patterns)]
|
||||
if not selected:
|
||||
print("no tests matched the provided filters", file=sys.stderr)
|
||||
return 1
|
||||
if self.args.list:
|
||||
self._print_listing(selected)
|
||||
return 0
|
||||
results: List[CaseResult] = []
|
||||
for case in selected:
|
||||
result = self._run_case(case)
|
||||
results.append(result)
|
||||
self._print_result(result)
|
||||
if result.failed and self.args.stop_on_fail:
|
||||
break
|
||||
self._print_summary(results)
|
||||
return 1 if any(r.failed for r in results) else 0
|
||||
|
||||
def _print_listing(self, cases: Sequence[TestCase]) -> None:
|
||||
width = max((len(case.name) for case in cases), default=0)
|
||||
for case in cases:
|
||||
desc = case.description()
|
||||
suffix = f" - {desc}" if desc else ""
|
||||
print(f"{case.name.ljust(width)}{suffix}")
|
||||
|
||||
def _run_case(self, case: TestCase) -> CaseResult:
|
||||
missing = [req for req in case.config.requires if not self._module_available(req)]
|
||||
if missing:
|
||||
reason = f"missing dependency: {', '.join(sorted(missing))}"
|
||||
return CaseResult(case, "skipped", "deps", reason)
|
||||
if case.config.skip:
|
||||
reason = case.config.skip_reason or "skipped via metadata"
|
||||
return CaseResult(case, "skipped", "skip", reason)
|
||||
start = time.perf_counter()
|
||||
compile_proc = self._compile(case)
|
||||
if case.config.expect_compile_error:
|
||||
result = self._handle_expected_compile_failure(case, compile_proc)
|
||||
result.duration = time.perf_counter() - start
|
||||
return result
|
||||
if compile_proc.returncode != 0:
|
||||
details = self._format_process_output(compile_proc)
|
||||
duration = time.perf_counter() - start
|
||||
            return CaseResult(case, "failed", "compile", f"compiler exited {compile_proc.returncode}", details, duration)
        updated_notes: List[str] = []
        compile_status, compile_note, compile_details = self._check_compile_output(case, compile_proc)
        if compile_status == "failed":
            duration = time.perf_counter() - start
            return CaseResult(case, compile_status, "compile", compile_note, compile_details, duration)
        if compile_status == "updated" and compile_note:
            updated_notes.append(compile_note)
        if case.config.compile_only:
            duration = time.perf_counter() - start
            if updated_notes:
                return CaseResult(case, "updated", "compile", "; ".join(updated_notes), details=None, duration=duration)
            return CaseResult(case, "passed", "compile", "compile-only", details=None, duration=duration)
        run_proc = self._run_binary(case)
        if run_proc.returncode != case.config.expected_exit:
            duration = time.perf_counter() - start
            message = f"expected exit {case.config.expected_exit}, got {run_proc.returncode}"
            details = self._format_process_output(run_proc)
            return CaseResult(case, "failed", "run", message, details, duration)
        status, note, details = self._compare_stream(case, "stdout", case.expected_stdout, run_proc.stdout, create_on_update=True)
        if status == "failed":
            duration = time.perf_counter() - start
            return CaseResult(case, status, "stdout", note, details, duration)
        if status == "updated" and note:
            updated_notes.append(note)
        stderr_status, stderr_note, stderr_details = self._compare_stream(
            case,
            "stderr",
            case.expected_stderr,
            run_proc.stderr,
            create_on_update=True,
            ignore_when_missing=True,
        )
        if stderr_status == "failed":
            duration = time.perf_counter() - start
            return CaseResult(case, stderr_status, "stderr", stderr_note, stderr_details, duration)
        if stderr_status == "updated" and stderr_note:
            updated_notes.append(stderr_note)
        duration = time.perf_counter() - start
        if updated_notes:
            return CaseResult(case, "updated", "compare", "; ".join(updated_notes), details=None, duration=duration)
        return CaseResult(case, "passed", "run", "ok", details=None, duration=duration)

    def _compile(self, case: TestCase) -> subprocess.CompletedProcess[str]:
        cmd = [sys.executable, str(self.main_py), str(case.source), "-o", str(case.binary_path)]
        for lib in case.config.libs:
            cmd.extend(["-l", lib])
        if self.args.verbose:
            print(f"\n{format_status('CMD', 'blue')} {quote_cmd(cmd)}")
        return subprocess.run(
            cmd,
            cwd=self.root,
            capture_output=True,
            text=True,
            env=self._env_for(case),
        )

    def _run_binary(self, case: TestCase) -> subprocess.CompletedProcess[str]:
        runtime_cmd = [self._runtime_entry(case), *case.runtime_args()]
        runtime_cmd = wrap_runtime_command(runtime_cmd)
        if self.args.verbose:
            print(f"{format_status('CMD', 'blue')} {quote_cmd(runtime_cmd)}")
        return subprocess.run(
            runtime_cmd,
            cwd=self.root,
            capture_output=True,
            text=True,
            env=self._env_for(case),
            input=case.stdin_data(),
        )

    def _runtime_entry(self, case: TestCase) -> str:
        binary = case.binary_path
        try:
            rel = os.path.relpath(binary, start=self.root)
        except ValueError:
            return str(binary)
        if rel.startswith(".."):
            return str(binary)
        if not rel.startswith("./"):
            rel = f"./{rel}"
        return rel

    def _handle_expected_compile_failure(
        self,
        case: TestCase,
        compile_proc: subprocess.CompletedProcess[str],
    ) -> CaseResult:
        duration = 0.0
        if compile_proc.returncode == 0:
            details = self._format_process_output(compile_proc)
            return CaseResult(case, "failed", "compile", "expected compilation to fail", details, duration)
        payload = compile_proc.stderr or compile_proc.stdout
        status, note, details = self._compare_stream(
            case,
            "compile",
            case.compile_expected,
            payload,
            create_on_update=True,
        )
        if status == "failed":
            return CaseResult(case, status, "compile", note, details, duration)
        if status == "updated":
            return CaseResult(case, status, "compile", note, details=None, duration=duration)
        return CaseResult(case, "passed", "compile", "expected failure observed", details=None, duration=duration)

    def _check_compile_output(
        self,
        case: TestCase,
        compile_proc: subprocess.CompletedProcess[str],
    ) -> Tuple[str, str, Optional[str]]:
        if not case.compile_expected.exists() and not self.args.update:
            return "skipped", "", None
        payload = self._collect_compile_output(compile_proc)
        if not payload and not case.compile_expected.exists():
            return "skipped", "", None
        return self._compare_stream(
            case,
            "compile",
            case.compile_expected,
            payload,
            create_on_update=True,
        )

    def _compare_stream(
        self,
        case: TestCase,
        label: str,
        expected_path: Path,
        actual_text: str,
        *,
        create_on_update: bool,
        ignore_when_missing: bool = False,
    ) -> Tuple[str, str, Optional[str]]:
        normalized_actual = normalize_text(actual_text)
        actual_clean = normalized_actual.rstrip("\n")
        if not expected_path.exists():
            if ignore_when_missing:
                return "passed", "", None
            if self.args.update and create_on_update:
                write_text(expected_path, normalized_actual)
                return "updated", f"created {expected_path.name}", None
            details = normalized_actual or None
            return "failed", f"missing expectation {expected_path.name}", details
        expected_text = normalize_text(expected_path.read_text(encoding="utf-8"))
        expected_clean = expected_text.rstrip("\n")
        if expected_clean == actual_clean:
            return "passed", "", None
        if self.args.update and create_on_update:
            write_text(expected_path, normalized_actual)
            return "updated", f"updated {expected_path.name}", None
        diff = diff_text(expected_text, normalized_actual, label)
        if not diff:
            diff = f"expected:\n{expected_text}\nactual:\n{normalized_actual}"
        return "failed", f"{label} mismatch", diff

    def _collect_compile_output(self, proc: subprocess.CompletedProcess[str]) -> str:
        parts: List[str] = []
        if proc.stdout:
            parts.append(proc.stdout)
        if proc.stderr:
            if parts and not parts[-1].endswith("\n"):
                parts.append("\n")
            parts.append(proc.stderr)
        return "".join(parts)

    def _env_for(self, case: TestCase) -> Dict[str, str]:
        env = dict(self.base_env)
        env.update(case.config.env)
        return env

    def _module_available(self, module: str) -> bool:
        if module not in self._module_cache:
            self._module_cache[module] = importlib.util.find_spec(module) is not None
        return self._module_cache[module]

    def _format_process_output(self, proc: subprocess.CompletedProcess[str]) -> str:
        parts = []
        if proc.stdout:
            parts.append("stdout:\n" + proc.stdout.strip())
        if proc.stderr:
            parts.append("stderr:\n" + proc.stderr.strip())
        return "\n\n".join(parts) if parts else "(no output)"

    def _print_result(self, result: CaseResult) -> None:
        tag_color = {
            "passed": (" OK ", "green"),
            "updated": ("UPD", "blue"),
            "failed": ("ERR", "red"),
            "skipped": ("SKIP", "yellow"),
        }
        label, color = tag_color.get(result.status, ("???", "red"))
        prefix = format_status(label, color)
        if result.status == "failed" and result.details:
            message = f"{result.case.name} ({result.stage}) {result.message}"
        elif result.message:
            message = f"{result.case.name} {result.message}"
        else:
            message = result.case.name
        print(f"{prefix} {message}")
        if result.status == "failed" and result.details:
            print(textwrap.indent(result.details, "    "))

    def _print_summary(self, results: Sequence[CaseResult]) -> None:
        total = len(results)
        passed = sum(1 for r in results if r.status == "passed")
        updated = sum(1 for r in results if r.status == "updated")
        skipped = sum(1 for r in results if r.status == "skipped")
        failed = sum(1 for r in results if r.status == "failed")
        print()
        print(f"Total: {total}, passed: {passed}, updated: {updated}, skipped: {skipped}, failed: {failed}")
        if failed:
            print("\nFailures:")
            for result in results:
                if result.status != "failed":
                    continue
                print(f"- {result.case.name} ({result.stage}) {result.message}")
                if result.details:
                    print(textwrap.indent(result.details, "    "))

def parse_args(argv: Optional[Sequence[str]] = None) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run L2 compiler tests")
    parser.add_argument("patterns", nargs="*", help="glob or substring filters for test names")
    parser.add_argument("--tests-dir", default="tests", help="directory containing .sl test files")
    parser.add_argument("--build-dir", default="build", help="directory for compiled binaries")
    parser.add_argument("--extra", action="append", help="additional .sl files or directories to treat as tests")
    parser.add_argument("--list", action="store_true", help="list tests and exit")
    parser.add_argument("--update", action="store_true", help="update expectation files with actual output")
    parser.add_argument("--stop-on-fail", action="store_true", help="stop after the first failure")
    parser.add_argument("-v", "--verbose", action="store_true", help="show compiler/runtime commands")
    return parser.parse_args(argv)


def main(argv: Optional[Sequence[str]] = None) -> int:
    args = parse_args(argv)
    runner = TestRunner(Path(__file__).resolve().parent, args)
    return runner.run()


if __name__ == "__main__":
    sys.exit(main())
@@ -1 +0,0 @@
python main.py tests/alloc.sl -o ./build/alloc > /dev/null && ./build/alloc
@@ -1 +0,0 @@
python main.py tests/arr_dynamic.sl -o ./build/arr_dynamic > /dev/null && ./build/arr_dynamic
@@ -1 +0,0 @@
python main.py tests/bss_override.sl -o ./build/bss_override > /dev/null && ./build/bss_override
@@ -1 +0,0 @@
python main.py tests/core_bitops.sl -o ./build/core_bitops > /dev/null && ./build/core_bitops
@@ -1 +0,0 @@
python main.py tests/else_if_shorthand.sl -o ./build/else_if_shorthand > /dev/null && ./build/else_if_shorthand
@@ -1 +0,0 @@
hello stderr
1 tests/eputs.stderr Normal file
@@ -0,0 +1 @@
hello stderr
@@ -1 +0,0 @@
python main.py tests/eputs.sl -o ./build/eputs > /dev/null && ./build/eputs 2>&1
@@ -1 +0,0 @@
python main.py tests/fib.sl -o ./build/fib > /dev/null && ./build/fib
@@ -1 +0,0 @@
python main.py tests/goto.sl -o ./build/goto > /dev/null && ./build/goto
@@ -1 +0,0 @@
python main.py tests/hello.sl -o ./build/hello > /dev/null && ./build/hello
@@ -1 +0,0 @@
python main.py tests/here.sl -o ./build/here > /dev/null && ./build/here
@@ -1 +0,0 @@
python main.py tests/inline.sl -o ./build/inline > /dev/null && ./build/inline
@@ -1 +0,0 @@
python main.py tests/integration_core.sl -o ./build/integration_core > /dev/null && ./build/integration_core
@@ -1 +0,0 @@
python main.py tests/io_read_file.sl -o ./build/io_read_file > /dev/null && ./build/io_read_file
1 tests/io_read_stdin.stdin Normal file
@@ -0,0 +1 @@
stdin via test
@@ -1 +0,0 @@
python main.py tests/io_read_stdin.sl -o ./build/io_read_stdin > /dev/null && printf 'stdin via test\n' | ./build/io_read_stdin
@@ -1 +0,0 @@
python main.py tests/io_write_buf.sl -o ./build/io_write_buf > /dev/null && ./build/io_write_buf
@@ -1 +0,0 @@
python main.py tests/io_write_file.sl -o ./build/io_write_file > /dev/null && ./build/io_write_file
@@ -1 +0,0 @@
python main.py tests/jmp_test.sl -o ./build/jmp_test > /dev/null && ./build/jmp_test
@@ -1 +0,0 @@
python main.py tests/list.sl -o ./build/list > /dev/null && ./build/list
@@ -1 +0,0 @@
python main.py tests/loop_while.sl -o ./build/loop_while > /dev/null && ./build/loop_while
@@ -1 +0,0 @@
python main.py tests/loops_and_cmp.sl -o ./build/loops_and_cmp > /dev/null && ./build/loops_and_cmp
@@ -1 +0,0 @@
python main.py tests/mem.sl -o ./build/mem > /dev/null && ./build/mem
@@ -1 +0,0 @@
python main.py tests/override_dup_compile_time.sl -o ./build/override_dup_compile_time > /dev/null && ./build/override_dup_compile_time
@@ -1 +0,0 @@
python main.py tests/rule110.sl -o ./build/rule110 > /dev/null && ./build/rule110
@@ -1 +0,0 @@
python main.py tests/str.sl -o ./build/str > /dev/null && ./build/str
@@ -1 +0,0 @@
python main.py tests/string_puts.sl -o ./build/string_puts > /dev/null && ./build/string_puts
@@ -1 +0,0 @@
python main.py tests/syscall_write.sl -o ./build/syscall_write > /dev/null && ./build/syscall_write
@@ -1,87 +0,0 @@
#!/usr/bin/python
import sys
import os
import subprocess
import platform
import re

COLORS = {
    "red": "\033[91m",
    "green": "\033[92m",
    "yellow": "\033[93m",
    "blue": "\033[94m",
    "reset": "\033[0m"
}

def print_colored(text, color):
    print(COLORS.get(color, "") + text + COLORS["reset"], end="")

def _is_arm_host():
    machine = platform.machine().lower()
    return machine.startswith("arm") or machine.startswith("aarch")

def _wrap_qemu_for_arm(command):
    if "qemu-x86_64" in command:
        return command
    pattern = re.compile(r"(^|\s*(?:&&|;)\s*)(\./\S+)")

    def _repl(match):
        prefix = match.group(1)
        binary = match.group(2)
        return f"{prefix}qemu-x86_64 {binary}"

    return pattern.sub(_repl, command)

def run_tests():
    test_dir = "tests"
    any_failed = False

    if not os.path.isdir(test_dir):
        print("No 'tests' directory found.")
        return 1

    for file in sorted(os.listdir(test_dir)):
        if file.endswith(".test"):
            test_path = os.path.join(test_dir, file)
            expected_path = test_path.replace(".test", ".expected")

            if not os.path.isfile(expected_path):
                print(f"Missing expected output file for {file}")
                any_failed = True
                continue

            with open(test_path, "r") as test_file:
                command = test_file.read().strip()

            with open(expected_path, "r") as expected_file:
                expected_output = expected_file.read().strip()

            try:
                run_command = _wrap_qemu_for_arm(command) if _is_arm_host() else command
                result = subprocess.run(run_command, shell=True, text=True, capture_output=True)
                actual_output = result.stdout.strip()
                stderr_output = result.stderr.strip()

                if result.returncode == 0 and actual_output == expected_output:
                    print_colored("[OK] ", "green")
                    print(f"{file} passed")
                else:
                    print_colored("[ERR] ", "red")
                    print(f"{file} failed (exit {result.returncode})")
                    print(f"Expected:\n{expected_output}")
                    print(f"Got:\n{actual_output}")
                    if stderr_output:
                        print(f"Stderr:\n{stderr_output}")
                    any_failed = True

            except Exception as e:
                print_colored(f"Error running {file}: {e}", "red")
                any_failed = True

    print("All tests passed." if not any_failed else "Some tests failed.")

    return 1 if any_failed else 0

if __name__ == "__main__":
    sys.exit(run_tests())
@@ -1 +0,0 @@
python main.py tests/typeconversion.sl -o ./build/typeconversion > /dev/null && ./build/typeconversion
@@ -1 +0,0 @@
python main.py tests/with_variables.sl -o ./build/with_variables > /dev/null && ./build/with_variables