
On PRs, Drahtbot provides a link to CoreCheck results. That's probably most people's interaction with CoreCheck.
The tool is using gcc at the moment; there are too many false positives, so it will probably need to move to clang. It doesn't seem to be used very regularly at the moment (few reports come in when the tool is down).
Why is it not being used? Attendees reported unreliability as the main reason: if it fails once, you stop looking at it. It is also slow: pages would load slowly, and when PRs are updated, the results take about an hour to refresh.
The tool shows how PRs change coverage, specifically which lines are affected.
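A minimal sketch of that kind of per-line coverage diff (not CoreCheck's actual implementation; the data layout is assumed), comparing line-hit maps for master and the PR branch:

```python
# Minimal sketch of a per-line coverage diff (not CoreCheck's actual code).
# Each map is {line_number: hit_count} for one source file, e.g. parsed
# from lcov/gcov output on master and on the PR branch.

def coverage_diff(master_hits, pr_hits):
    """Return lines whose covered/uncovered status changed in the PR."""
    newly_uncovered = []
    newly_covered = []
    for line in sorted(set(master_hits) | set(pr_hits)):
        before = master_hits.get(line, 0) > 0
        after = pr_hits.get(line, 0) > 0
        if before and not after:
            newly_uncovered.append(line)
        elif after and not before:
            newly_covered.append(line)
    return newly_covered, newly_uncovered

# Hypothetical example data:
master = {10: 3, 11: 0, 12: 5}
pr = {10: 3, 11: 2, 12: 0, 13: 1}
print(coverage_diff(master, pr))  # ([11, 13], [12])
```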
The homepage used to be quite slow. It uses Datadog in the backend, which triggers tracking prevention in browsers, so requests are now routed through a proxy. The proxy is a Lambda, which is not always up, adding lag to the initial homepage load in case of a cold start.
The homepage has a couple of vanity metrics that look nice but aren't super useful.
Datadog is used for legacy reasons; there is no real reason it couldn't use e.g. Grafana. Datadog doesn't differentiate between prod and dev.
The tool will highlight if a PR is >10% slower than master, which works quite well. Attendees were less happy about the noisy SonarQube code smell checks, which are generally not too useful. It also measures static binary size.
One attendee suggested the overview is quite dense; maybe adding tabs or letting the user choose the information they see would help. Also, if only one component of the website is unreliable, that makes people doubt the whole website/project.
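As a rough sketch of that check (only the >10% threshold comes from the discussion; the data layout and names are assumptions), one could compare per-benchmark timings between master and the PR:

```python
# Sketch of a benchmark regression check: flag benchmarks that are more than
# 10% slower on the PR branch than on master. The data layout is assumed;
# only the >10% threshold comes from the discussion.

SLOWDOWN_THRESHOLD = 0.10  # 10% slower than master

def find_regressions(master_times, pr_times, threshold=SLOWDOWN_THRESHOLD):
    """Return (name, relative_slowdown) for benchmarks exceeding the threshold."""
    regressions = []
    for name, base in master_times.items():
        new = pr_times.get(name)
        if new is None or base == 0:
            continue
        slowdown = (new - base) / base
        if slowdown > threshold:
            regressions.append((name, slowdown))
    return regressions

# Hypothetical timings in seconds:
master = {"CCoinsCaching": 1.00, "MempoolEviction": 0.50}
pr = {"CCoinsCaching": 1.25, "MempoolEviction": 0.51}
print(find_regressions(master, pr))  # [('CCoinsCaching', 0.25)]
```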
Mutation testing is only done for some files, not the whole codebase. Coverage is slowly increasing.
You can see the source code in the report. A red line marks an unkilled mutant, i.e. a change that does not cause any tests to fail.
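A toy illustration of what a surviving (unkilled) mutant means, sketched in Python for brevity even though the real mutants are applied to the C++ sources:

```python
# Toy illustration of an unkilled mutant (Python for brevity; the real
# mutants are applied to the C++ sources).

def fee_too_low(fee, min_fee):
    return fee < min_fee          # original code

def fee_too_low_mutant(fee, min_fee):
    return fee <= min_fee         # mutant: '<' changed to '<='

# A weak test that only checks a value well below the minimum passes for
# both versions, so the mutant survives (a "red line" in the report):
assert fee_too_low(10, 100) == fee_too_low_mutant(10, 100)

# A test that exercises the boundary kills the mutant, since the two
# versions now disagree:
assert fee_too_low(100, 100) != fee_too_low_mutant(100, 100)
```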
Tests are run every Friday and take approximately 20 hours. The slowness comes mostly from the functional test suite, with individual tests taking more than minutes to run; this is much more of a factor than compilation, which is quite fast thanks to ccache. A manual selection of tests is run, chosen for being relevant to the code covered. We could also compile this list from test coverage, but for e.g. coinselection that doesn't work: a lot of tests use coinselection, but that doesn't mean they test coinselection logic.
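Compiling such a list from coverage data could look roughly like this (a sketch with assumed data; as noted above, a test covering a file is not the same as a test exercising its logic, so manual curation is still needed for cases like coinselection):

```python
# Sketch: pick functional tests relevant to a file from per-test coverage
# data. Data layout is assumed. A test covering a file is not the same as
# a test exercising its logic, so manual curation still helps.

def relevant_tests(target_file, coverage_by_test):
    """Return test names whose coverage includes target_file."""
    return sorted(
        test for test, files in coverage_by_test.items()
        if target_file in files
    )

# Hypothetical per-test coverage:
coverage_by_test = {
    "wallet_basic.py": {"src/wallet/wallet.cpp", "src/wallet/coinselection.cpp"},
    "feature_rbf.py": {"src/validation.cpp"},
    "wallet_coinselection.py": {"src/wallet/coinselection.cpp"},
}
print(relevant_tests("src/wallet/coinselection.cpp", coverage_by_test))
# ['wallet_basic.py', 'wallet_coinselection.py']
```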
A good example of a PR that fixed a bug from mutation testing: http://github.com/bitcoin/bitcoin/pull/33047
One attendee suggested it would be nice to be able to run mutation testing for a single PR. The issue is that it takes a really long time to run even for a small number of files. However, if a PR changes ~20 lines of code, we should be able to get much more specific with the mutation testing. People can easily run it locally, see https://github.com/brunoerg/bcore-mutation
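A rough sketch of how a PR-scoped run could restrict mutants to the lines a PR touches (this is a generic approach based on parsing the unified diff, not bcore-mutation's actual interface):

```python
# Sketch: restrict mutation targets to lines changed by a PR, by parsing
# the unified diff (e.g. from `git diff -U0 master...pr`). This is a
# generic approach, not bcore-mutation's actual interface.
import re

HUNK_RE = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@")

def changed_lines(diff_text):
    """Return {file_path: set of changed line numbers in the new version}."""
    changed = {}
    current_file = None
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            current_file = line[len("+++ b/"):]
            changed.setdefault(current_file, set())
        else:
            m = HUNK_RE.match(line)
            if m and current_file:
                start = int(m.group(1))
                count = int(m.group(2) or 1)
                changed[current_file].update(range(start, start + count))
    return changed

# Hypothetical diff touching ~3 lines; only these lines would be mutated.
diff = """+++ b/src/wallet/coinselection.cpp
@@ -120,2 +120,3 @@
"""
print(changed_lines(diff))  # {'src/wallet/coinselection.cpp': {120, 121, 122}}
```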
The tool tries to avoid useless mutations based on regex (e.g. logging, ...). It also has support for generating mutants only for lines of code that have test coverage, because otherwise it just won't catch anything anyway.
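A minimal sketch of those two filters, regex-based skipping of lines not worth mutating and restricting mutants to covered lines (the patterns and data here are illustrative assumptions, not the tool's actual rules):

```python
# Sketch of the two filters described above: skip lines not worth mutating
# (matched by regex, e.g. logging) and only mutate lines that have test
# coverage. Patterns and data are illustrative assumptions.
import re

SKIP_PATTERNS = [
    re.compile(r"\bLogPrintf?\s*\("),   # logging calls
    re.compile(r"^\s*//"),              # comments
    re.compile(r"^\s*$"),               # blank lines
]

def mutation_targets(source_lines, covered_lines):
    """Yield (line_number, text) for lines worth generating mutants for."""
    for number, text in enumerate(source_lines, start=1):
        if number not in covered_lines:
            continue  # an uncovered line can't produce a killable mutant
        if any(p.search(text) for p in SKIP_PATTERNS):
            continue  # mutating logging/comments rarely tells us anything
        yield number, text

source = [
    "if (nFee < nMinFee) {",
    '    LogPrintf("fee too low\\n");',
    "    return false;",
    "}",
]
print(list(mutation_targets(source, covered_lines={1, 2, 3})))
# [(1, 'if (nFee < nMinFee) {'), (3, '    return false;')]
```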
The mutation testing could be extended to cover libsecp256k1.
A recent paper indicated that it's also helpful to mutate the tests themselves, not just the tested code.