`actions/upload-artifact@v4` comes with the following [breaking
change](https://github.com/actions/upload-artifact?tab=readme-ov-file#breaking-changes):
"Due to how Artifacts are created in this new version, it is no longer
possible to upload to the same named Artifact multiple times. You must
either split the uploads into multiple Artifacts with different names,
or only upload once. Otherwise you will encounter an error."
Because of this we cannot copy multiple blob report folders into the same
artifact and rely on the action to merge them. Instead, as suggested by
their migration guide, we upload each blob report into a uniquely named
artifact with the prefix `blob-report-` and then download all of them
into the same directory.
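A minimal sketch of that pattern with `upload-artifact@v4`/`download-artifact@v4` (the `matrix.shardIndex` suffix and the paths are illustrative assumptions, not necessarily the exact ones used in the workflows):

```yaml
# In each test shard: upload its blob report under a unique artifact name.
- name: Upload blob report
  if: ${{ !cancelled() }}
  uses: actions/upload-artifact@v4
  with:
    name: blob-report-${{ matrix.shardIndex }}
    path: blob-report
    retention-days: 7

# In the merge job: download every blob-report-* artifact into one directory.
- name: Download blob reports
  uses: actions/download-artifact@v4
  with:
    pattern: blob-report-*
    path: all-blob-reports
    merge-multiple: true
```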
This version change also affects how we store `pull_request_number.txt`
in an artifact. Previously we relied on the fact that uploading an
artifact with the same name would silently overwrite the existing one,
but now it is an error. To overcome that, we upload the PR number file
into uniquely named `pull-request-*` artifacts and later extract them
into the same location with `unzip -n`, which never overwrites an
existing file, so we end up with a single `pull_request_number.txt`.
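A sketch of that extraction step, assuming the `pull-request-*` artifacts were downloaded as zip archives (which is what the REST API returns) into an `artifacts/` directory; the paths are illustrative:

```yaml
- name: Extract pull request number
  run: |
    # Every pull-request-* artifact contains the same pull_request_number.txt.
    # `unzip -n` never overwrites, so the first extracted copy wins.
    for zip in artifacts/pull-request-*.zip; do
      unzip -n "$zip" -d .
    done
    cat pull_request_number.txt
```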
Reference #28800
- remove error details from the reports
- collapse flaky tests by default
- limit comment to 65365 characters
The GitHub API has a comment length limit of 65536 characters:
```
Unhandled error: HttpError: Validation Failed: {"resource":"IssueComment","code":"unprocessable","field":"data","message":"Body is too long (maximum is 65536 characters)"}
```
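A minimal sketch of the truncation before posting the comment; the file names are illustrative assumptions, and the 65365 cut-off leaves headroom under the 65536 hard limit:

```yaml
- name: Comment on the pull request
  uses: actions/github-script@v6
  with:
    script: |
      const fs = require('fs');
      // Illustrative file names.
      let body = fs.readFileSync('report.md', 'utf8');
      const limit = 65365;
      if (body.length > limit)
        body = body.substring(0, limit - 100) + '\n\n... (comment truncated)';
      const prNumber = Number(fs.readFileSync('pull_request_number.txt', 'utf8'));
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: prNumber,
        body,
      });
```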
The credentials are created with `az ad sp create-for-rbac --name "playwright-github-actions" --role "Storage Blob Data Contributor" --scopes /subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Storage/storageAccounts/<storage account>`.
We cannot use `azure/login@v1` for the login as it does not seem to
properly propagate credentials to `azcopy` in the next step (there are
some reports about keyring problems on Linux runners).
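Instead, one option is to authenticate `azcopy` through its service-principal auto-login environment variables, roughly like this (the secret names, container URL and destination path are illustrative assumptions):

```yaml
- name: Upload report to Azure blob storage
  run: |
    REPORT_DIR='run-${{ github.run_id }}-${{ github.sha }}'
    azcopy copy "playwright-report/*" "https://<storage account>.blob.core.windows.net/<container>/$REPORT_DIR/" --recursive
  env:
    AZCOPY_AUTO_LOGIN_TYPE: SPN
    AZCOPY_SPA_APPLICATION_ID: ${{ secrets.AZCOPY_SPA_APPLICATION_ID }}
    AZCOPY_SPA_CLIENT_SECRET: ${{ secrets.AZCOPY_SPA_CLIENT_SECRET }}
    AZCOPY_TENANT_ID: ${{ secrets.AZCOPY_TENANT_ID }}
```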
Also:
- remove the `blob-report` directory at the start;
- put the markdown `report.md` next to `package.json`;
- use the default location in Playwright's workflows.
References #24451.
Since all resources are uploaded in the separate workflow anyway, there
is not much point in passing the options and uploading blob reports as a
separate step.
* Fix report downloading from Azure (reports are now zipped)
* Extracted upload logic into an action
* Extracted PR number file generation into its own job
Fixes
```
Notice: Report url: https://mspwblobreport.z1.web.core.windows.net/run-5533005176-1-a0b0752662f8af5f841ff7a65b04d02066474ff2/index.html
ReferenceError: fs is not defined
at eval (eval at callAsyncFunction (/home/runner/work/_actions/actions/github-script/v6/dist/index.js:15143:16), <anonymous>:30:18)
at callAsyncFunction (/home/runner/work/_actions/actions/github-script/v6/dist/index.js:15144:12)
at main (/home/runner/work/_actions/actions/github-script/v6/dist/index.js:15236:26)
at /home/runner/work/_actions/actions/github-script/v6/dist/index.js:15217:1
at /home/runner/work/_actions/actions/github-script/v6/dist/index.js:15268:3
at Object.<anonymous> (/home/runner/work/_actions/actions/github-script/v6/dist/index.js:15271:12)
at Module._compile (node:internal/modules/cjs/loader:1105:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
```
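Inside `actions/github-script` the inlined script does not get `fs` predefined; it has to be required explicitly, roughly like this (the file name is illustrative):

```yaml
- uses: actions/github-script@v6
  with:
    script: |
      const fs = require('fs');
      // Illustrative: read the PR number stored by the triggering workflow.
      const prNumber = Number(fs.readFileSync('pull_request_number.txt', 'utf8'));
      core.info(`Pull request: ${prNumber}`);
```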
The check summary has a link to the report and a link to the merge
workflow run. Otherwise it's very hard to tell which merge workflow run
corresponds to a given PR.
Downloading 457 MB of reports with traces (for the tracing tests) takes
>3 minutes, and uploading them to Azure takes >5 minutes, which easily
exceeds the 10-minute budget.
For some reason the `pull_requests` field on `workflow_run` is empty for
pull requests created from branches in forked repositories, see
https://github.com/orgs/community/discussions/25220. As a workaround we
store the triggering pull request number in a file.
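A sketch of that workaround in the `pull_request`-triggered workflow; the artifact name suffix is just an illustrative way to keep names unique per uploading job:

```yaml
- name: Store the pull request number
  if: github.event_name == 'pull_request'
  run: echo '${{ github.event.number }}' > pull_request_number.txt
- name: Upload the pull request number
  if: github.event_name == 'pull_request'
  uses: actions/upload-artifact@v4
  with:
    # The suffix only has to make the artifact name unique across jobs.
    name: pull-request-${{ strategy.job-index }}
    path: pull_request_number.txt
```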
* Moved report merging and publishing logic into `create_test_report.yml`,
shared between all workflows
* Merged reports are now published for try jobs on pull requests too. In
order to achieve that, the logic had to be extracted into a separate
workflow triggered by
[workflow_run](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_run);
this way it can access secrets even if the original workflow could not
(see the sketch after this list).
* The blob report data flow differs depending on whether the workflow is
triggered by a pull request or a push:
- For `pull_request` the workflow doesn't have access to the secrets, so
it uploads the blob report to the GitHub artifact storage. Later on the
merge workflow uploads that blob report to Azure blob storage.
- Workflows triggered by the `push` event can read secrets. They upload
the blob report directly to Azure blob storage, and the merge workflow
downloads the report from there rather than from GitHub artifacts.
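A condensed sketch of the merge workflow's trigger and the two download paths; the workflow names, secrets, container URL and paths are illustrative assumptions:

```yaml
name: Create test report
on:
  workflow_run:
    workflows: ['tests 1', 'tests 2']  # illustrative names of the test workflows
    types: [completed]

jobs:
  merge-reports:
    runs-on: ubuntu-latest
    steps:
      # Pull request runs uploaded their blob reports to GitHub artifact storage.
      - name: Download blob reports from GitHub artifacts
        if: github.event.workflow_run.event == 'pull_request'
        uses: actions/download-artifact@v4
        with:
          pattern: blob-report-*
          path: all-blob-reports
          merge-multiple: true
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      # Push runs uploaded their blob reports straight to Azure.
      - name: Download blob reports from Azure
        if: github.event.workflow_run.event == 'push'
        run: azcopy copy "https://<storage account>.blob.core.windows.net/<container>/run-${{ github.event.workflow_run.id }}/*" all-blob-reports --recursive
        env:
          AZCOPY_AUTO_LOGIN_TYPE: SPN
          AZCOPY_SPA_APPLICATION_ID: ${{ secrets.AZCOPY_SPA_APPLICATION_ID }}
          AZCOPY_SPA_CLIENT_SECRET: ${{ secrets.AZCOPY_SPA_CLIENT_SECRET }}
          AZCOPY_TENANT_ID: ${{ secrets.AZCOPY_TENANT_ID }}
```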