We used to not report a fatal error and would hang forever, because the
worker did not run any tests but also did not report any errors.
Also properly show stack-less errors.
This is a follow-up fix to microsoft/playwright#8385.
Testing options are limited right now, but this change was confirmed
with a client running on my physical machine and a LaunchServer running
in a Docker container.
Before this change, har.spec.ts only passed when the client and
server were on the same filesystem.
microsoft/playwright#8450 will likely give us options to test this in an
automated way in the official CI suite.
This change ensures the HAR file is saved at `recordHar.path` on the
client instead of the server.
NB: The goal was to make this change transparent to the user and NOT
introduce any new APIs. Namely, I want to leave the API open for
potential `context.har.start()` and `context.har.stop()`.
This does BREAK servers that expect the HAR to be at `recordHar.path`
on the server side, but I think that's OK: there have been no reports
of a missing HAR on the client, which suggests few users record HAR
with the client and server on different hosts anyway.
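For illustration, here is a minimal sketch of the kind of setup this affects, assuming a browser launch server is already running and reachable at the given `wsEndpoint` (the URL is a placeholder):

```ts
import { chromium } from 'playwright';

(async () => {
  // Connect to a remote launch server (e.g. running in a Docker container).
  const browser = await chromium.connect({ wsEndpoint: 'ws://docker-host:9222/ws' });

  // recordHar.path now refers to a path on the *client* filesystem.
  const context = await browser.newContext({
    recordHar: { path: 'recorded.har' },
  });

  const page = await context.newPage();
  await page.goto('https://example.com');

  // The HAR is written to recorded.har on the client when the context closes.
  await context.close();
  await browser.close();
})();
```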
Closes #8355
This makes `test.fail` tests count as passing when they actually fail (see the example below):
- Stop restarting the worker.
- Retry when the test unexpectedly passes instead of when it fails.
- Behave similarly to regular tests in a `describe.serial` suite.
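For reference, a minimal example of a test marked with `test.fail()` (the test body is made up):

```ts
import { test, expect } from '@playwright/test';

test('broken feature', async ({ page }) => {
  // Mark this test as expected to fail: an actual failure counts as passing,
  // and an unexpected pass is reported as a failure.
  test.fail();

  await page.goto('https://example.com');
  await expect(page.locator('h1')).toHaveText('This heading does not exist');
});
```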
chore: migrate tracing to har
- `HarTracer` is used by both `HarRecorder` that implements
`recordHar` context option, and by tracing.
- We keep the `trace.network` format for now, so it is not
  yet a valid HAR file, but it contains HAR entries.
Instead of filtering the whole trace file on export, we write
into a separate trace file for each chunk. We also write a separate
trace.network file with all resources, because it is reused between
chunks.
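A rough sketch of that layout, with hypothetical names rather than the actual implementation:

```ts
// Hypothetical illustration of the on-disk layout described above:
//
//   tracesDir/
//     <traceName>-chunk1.trace   // events for the first chunk
//     <traceName>-chunk2.trace   // events for the second chunk
//     <traceName>.network        // HAR entries shared by all chunks

import * as fs from 'fs';
import * as path from 'path';

class ChunkedTraceWriter {
  private chunkIndex = 0;
  private chunkStream: fs.WriteStream | undefined;
  private networkStream: fs.WriteStream;

  constructor(private tracesDir: string, private traceName: string) {
    this.networkStream = fs.createWriteStream(path.join(tracesDir, `${traceName}.network`));
  }

  startChunk() {
    this.chunkIndex++;
    this.chunkStream = fs.createWriteStream(
        path.join(this.tracesDir, `${this.traceName}-chunk${this.chunkIndex}.trace`));
  }

  appendEvent(event: object) {
    // Each chunk gets its own file, so export does not need to filter one big trace.
    this.chunkStream?.write(JSON.stringify(event) + '\n');
  }

  appendNetworkEntry(harEntry: object) {
    // Network resources are reused between chunks, so they go into one shared file.
    this.networkStream.write(JSON.stringify(harEntry) + '\n');
  }
}
```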
This brings us closer to the `tracing.startFile()`/`stopFile()` API.
fix(test runner): avoid internal error for step end without begin
Consider the following scenario:
- Test finishes and starts tearing down fixtures.
- Fixture teardown starts a step S and then times out.
- We declare the test finished (with timeout).
- Dispatcher shuts down the worker and spins up a new one for a retry.
  Additionally, it clears step information for the test so it is
  ready for the new retry. Step S information is lost.
- Meanwhile, during worker teardown, the step S does
actually finish (usually with an error), and we send stepEnd for S.
- Dispatcher does not know what to do with step S end and
prints an internal error.
The fix is to ignore certain messages from a shutting-down worker that has already failed.
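A minimal sketch of the kind of guard this implies, with hypothetical message and field names (not the actual dispatcher code):

```ts
// Hypothetical message shape and dispatcher state; all names are assumptions.
type StepEndPayload = { testId: string; stepId: string; error?: string };

class DispatcherSketch {
  // Step bookkeeping per test; cleared when the test is scheduled for a retry.
  private steps = new Map<string, { title: string }>();
  // Set once the worker has failed and is being torn down.
  private workerFailed = false;

  onStepEnd(payload: StepEndPayload) {
    const step = this.steps.get(payload.stepId);
    if (!step) {
      // Step info was already cleared for the retry. If the worker is shutting
      // down after a failure, this message is expected - ignore it silently
      // instead of reporting an internal error.
      if (this.workerFailed)
        return;
      throw new Error(`Internal error: step end without step begin: ${payload.stepId}`);
    }
    // ... normal step-end handling ...
    this.steps.delete(payload.stepId);
  }
}
```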
When the browser receives multiple header values for the same header name,
we present them as an LF-separated value. This is not considered valid in
Node, so we should split by LF when serving a snapshot.
Headers may contain other invalid characters too, so just in case we wrap this in a try/catch.
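A minimal sketch of that splitting while serving a snapshot, assuming a plain Node `http.ServerResponse`; the surrounding names are made up:

```ts
import type { ServerResponse } from 'http';

// Headers come from the recorded snapshot; a value may be several header
// values joined with LF, which Node rejects as a single string.
function setSnapshotHeaders(response: ServerResponse, headers: { name: string; value: string }[]) {
  for (const { name, value } of headers) {
    try {
      // Split LF-joined values back into multiple header values.
      const values = value.split('\n');
      response.setHeader(name, values.length === 1 ? values[0] : values);
    } catch (e) {
      // Headers may contain other characters that Node considers invalid -
      // skip them rather than failing the whole response.
    }
  }
}
```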
- Simplify by only considering client/ vs non-client/ frames (sketched after this list)
- Fix stack traces when calling from other Playwright code, e.g. from the CLI
- Account for re-entrant calls that happen when
  instrumenting context creation/destruction
- Add tests
- Fix StackTraceView on Windows
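A rough sketch of the client/ vs non-client/ distinction, with made-up helper names:

```ts
import * as path from 'path';

// Hypothetical helper: frames under the client/ folder belong to Playwright
// itself; the first frame outside of it is treated as the user's (or the
// CLI's) call site.
const CLIENT_DIR = path.join(__dirname, 'client') + path.sep;

function isClientFrame(fileName: string): boolean {
  return fileName.startsWith(CLIENT_DIR);
}

function firstApiCallFrame(frames: { fileName: string }[]) {
  return frames.find(frame => !isClientFrame(frame.fileName));
}
```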
When sharing a context between tests and using `'on-first-retry'`, we
could end up with tracing still running in non-retried tests. That's
extra overhead for no reason.
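For context, this is the `'on-first-retry'` setting in the test config:

```ts
// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  retries: 2,
  use: {
    // Collect a trace only when a test is retried for the first time.
    trace: 'on-first-retry',
  },
};

export default config;
```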
Without this, Playwright's CDP feature leaves unreachable
targets (namely OOPIFs).
This change allows for more advanced experimentation in user-land
without relying on out-of-band CDP connections and clients.
Now you can, for example, call `DOM.getDocument` on the
page OR main frame, observe there is an iframe node with
no `contentDocument` (i.e. OOPIF), make note of the referenced
`frameId`, and then iterate over page.frames() calling `Target.getTargetInfo`
on each to link the Playwright Frame with the CDP `frameId` and
then recurse.
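A sketch of that procedure, assuming `newCDPSession` can be called for both the page and individual frames; the target URL is a placeholder and error handling is omitted:

```ts
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto('https://example.com/page-with-oopif');

  // Inspect the DOM of the main frame; OOPIF iframe nodes carry a frameId
  // but have no contentDocument in the returned tree.
  const pageSession = await context.newCDPSession(page);
  const { root } = await pageSession.send('DOM.getDocument', { depth: -1, pierce: false });
  // ... walk `root` to find iframe nodes without contentDocument and note their frameId ...

  // Link each Playwright Frame to its CDP frameId via its own session.
  for (const frame of page.frames()) {
    const frameSession = await context.newCDPSession(frame);
    const { targetInfo } = await frameSession.send('Target.getTargetInfo');
    console.log(frame.url(), targetInfo.targetId);
  }

  await browser.close();
})();
```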
Relates #8113
Using a worker fixture forces a new worker. This might be unexpected
when part of the test file runs in one worker and another part runs
in a different worker. Top-level use of worker fixtures is still fine.
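For reference, a worker-scoped fixture looks like this (the fixture name and its setup are made up):

```ts
import { test as base } from '@playwright/test';

// A worker-scoped fixture is set up once per worker process and shared by
// all tests that run in that worker.
export const test = base.extend<{}, { sharedServer: string }>({
  sharedServer: [async ({}, use) => {
    const serverUrl = 'http://localhost:3000'; // start an expensive resource once per worker
    await use(serverUrl);
    // Teardown runs when the worker shuts down.
  }, { scope: 'worker' }],
});
```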