Diagnose your environment (new or upgraded)

Multiple factors can create issues when working with Aggregation Service, including report formatting, output domain problems, and coordinator configuration. It's important to understand the error source and any metadata it contains to accurately diagnose the problem.

Verify client measurement API setup

After you have verified your origin server has been properly registered, complete the following steps:

  1. Check how you are triggering reports. Confirm that you are receiving the correct report format for the API being used:

    • Attribution Reporting API
    • Private Aggregation API
      • Reporting in the Private Aggregation API is done with the contributeToHistogram function. Ensure that you are passing both the bucket key and the value, and that the bucket key is a BigInt. (Read more in the Private Aggregation API documentation; a short sketch follows these steps.)
  2. If you are triggering reports as recommended but are still seeing the issue, check for errors in the Chrome Developer Console, in both the "Console" and "Network" tabs.
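
For reference, here is a minimal sketch of a Private Aggregation contribution inside a Shared Storage worklet; the operation name and the bucket and value used are illustrative only:

        // worklet.js: a minimal sketch; operation name and values are illustrative.
        class MeasurementOperation {
          async run(data) {
            // The bucket key must be a BigInt (up to 128 bits); the value is a Number.
            privateAggregation.contributeToHistogram({ bucket: 1369n, value: 128 });
          }
        }
        register('measurement-operation', MeasurementOperation);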

If you need further troubleshooting support for these client APIs, continue on to our debugging guidance for Attribution Reporting API and Private Aggregation API + Shared Storage.

Troubleshooting your reporting origin setup

The reporting origin server is where you have declared the corresponding .well-known endpoints to which aggregatable reports are sent. Verify that your deployed reporting origin server has been properly enrolled and registered.

Is your reporting origin receiving reports?

Confirm that reports are arriving at the .well-known endpoint that matches the client-side measurement API you are using. A quick way to spot-check delivery is sketched after the list:

    • Attribution Reporting: POST /.well-known/attribution-reporting/report-aggregate-attribution
    • Private Aggregation + Shared Storage (Combo): POST /.well-known/private-aggregation/report-shared-storage
    • Private Aggregation + Protected Audience (Combo): POST /.well-known/private-aggregation/report-protected-audience
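
To spot-check that reports are arriving, you can temporarily log POSTs to these paths. The following is a minimal sketch, assuming a Node.js server with Express (an assumption for illustration, not an enrollment requirement; TLS termination is not shown):

        // check-endpoints.js: minimal sketch, assuming Node.js + Express.
        const express = require('express');
        const app = express();
        app.use(express.json({ type: '*/*' })); // accept the report's JSON body regardless of content type

        const endpoints = [
          '/.well-known/attribution-reporting/report-aggregate-attribution',
          '/.well-known/private-aggregation/report-shared-storage',
          '/.well-known/private-aggregation/report-protected-audience',
        ];

        for (const path of endpoints) {
          app.post(path, (req, res) => {
            console.log(`Report received at ${path}`);
            res.sendStatus(200); // acknowledge receipt
          });
        }

        app.listen(8080); // reports are delivered over HTTPS; put this behind your TLS frontend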


Troubleshooting your aggregate reports

Aggregate reports are generated by the client-side measurement APIs and sent to your reporting origin. These reports should be converted to AVRO format by your reporting endpoint. If there are issues with this conversion, or if the reports themselves are not intact, you may see errors in the Aggregation Service.

Are your aggregatable reports converting correctly?

Verify that your reporting endpoint (.well-known/…) is correctly converting the aggregatable JSON reports it receives into AVRO; a conversion sketch follows below.
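
As an illustration, here is a minimal Node.js sketch of that conversion. It assumes the avsc npm package, a report saved as report.json, and the record schema (payload, key_id, shared_info) published in the Aggregation Service GitHub repository:

        // convert-report.js: a minimal sketch, assuming the `avsc` npm package.
        const avro = require('avsc');
        const fs = require('fs');

        // Record schema as published in the Aggregation Service repository.
        const reportType = avro.Type.forSchema({
          type: 'record',
          name: 'AvroReportRecord',
          fields: [
            { name: 'payload', type: 'bytes' },      // encrypted payload, base64-decoded
            { name: 'key_id', type: 'string' },      // key id from aggregation_service_payloads
            { name: 'shared_info', type: 'string' }, // shared_info string, passed through verbatim
          ],
        });

        // One aggregatable JSON report, as collected at the .well-known endpoint.
        const jsonReport = JSON.parse(fs.readFileSync('report.json', 'utf8'));

        // Each payload entry becomes one AVRO record; shared_info must not be altered.
        const encoder = avro.createFileEncoder('reports.avro', reportType);
        for (const p of jsonReport.aggregation_service_payloads) {
          encoder.write({
            payload: Buffer.from(p.payload, 'base64'), // base64-decode into a byte array
            key_id: p.key_id,
            shared_info: jsonReport.shared_info,
          });
        }
        encoder.end();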

API errors that can surface due to this issue include the following:

Error DECRYPTION_ERROR
Example
                "result_info": {
                    "return_code": "REPORTS_WITH_ERRORS_EXCEEDED_THRESHOLD",
                    "return_message": "Aggregation job failed early because the number of reports excluded from aggregation exceeded threshold.",
                    "error_summary": {
                        "error_counts": [
                            {
                                "category": "DECRYPTION_ERROR",
                                "count": 1,
                                "description": "Unable to decrypt the report. This may be caused by: tampered aggregatable report shared info, corrupt encrypted report, or other such issues."
                            },
                            {
                                "category": "NUM_REPORTS_WITH_ERRORS",
                                "count": 1,
                                "description": "Total number of reports that had an error. These reports were not considered in aggregation. See additional error messages for details on specific reasons."
                            }
                        ],
                        "error_messages": []
                    }
                }
            
Check This can occur when the AVRO files were not generated correctly, whether the aggregatable report AVRO or the output domain AVRO. Were the aggregatable AVRO reports generated correctly? The payload must be base64-decoded and converted into a byte array, and the report must be in AVRO format. Also check that the output domain AVRO is correct: buckets are converted to escaped unicode hex format and then into a byte array (a bucket-encoding sketch follows below). If you see more than one error count, you can find more detail about the errors on the Aggregation Service GitHub page.
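
For the output domain side, here is a minimal sketch of encoding a bucket key as the 16-byte big-endian bytestring the output domain AVRO expects (the single-field bucket schema comes from the Aggregation Service repository; the sample key is arbitrary):

        // domain-bucket.js: encode a BigInt bucket key as a 16-byte big-endian bytestring.
        function bucketToBytes(bucket /* BigInt */) {
          const bytes = Buffer.alloc(16); // 128-bit key space, zero-padded
          let remaining = bucket;
          for (let i = 15; i >= 0; i--) { // fill from the least significant byte upward
            bytes[i] = Number(remaining & 0xffn);
            remaining >>= 8n;
          }
          return bytes;
        }

        // Example: bucket key 1369n becomes 0x00000000000000000000000000000559.
        console.log(bucketToBytes(1369n).toString('hex'));
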
Error DECRYPTION_KEY_NOT_FOUND
Example
                "result_info": {
                    "return_code": "REPORTS_WITH_ERRORS_EXCEEDED_THRESHOLD",
                    "return_message": "Aggregation job failed early because the number of reports excluded from aggregation exceeded threshold.",
                    "error_summary": {
                        "error_counts": [{
                            "category": "DECRYPTION_KEY_NOT_FOUND",
                            "count": 1,
                            "description": "Could not find decryption key on private key endpoint."
                        }, {
                            "category": "NUM_REPORTS_WITH_ERRORS",
                            "count": 1,
                            "description": "Total number of reports that had an error. These reports were not considered in aggregation. See additional error messages for details on specific reasons."
                        }],
                        "error_messages": []
                    }
                }
            
Check Attribution Reporting API

For the Attribution Reporting API, this error may be caused by an issue with the trigger registration. Check that the trigger was registered with the correct cloud using the aggregation_coordinator_origin field (instructions here); a trimmed example follows. You may also be sending AWS-encrypted reports to your Google Cloud deployment of Aggregation Service, or Google Cloud-encrypted reports to your AWS deployment. Validate which public key endpoint was used to encrypt the aggregatable reports: for Google Cloud, the `aggregation_coordinator_origin` field in the aggregatable report should be https://publickeyservice.msmt.gcp.privacysandboxservices.com; for AWS, it's https://publickeyservice.msmt.aws.privacysandboxservices.com.
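
For illustration, a trimmed trigger registration response header might carry the field like this (all other trigger fields are omitted):

        Attribution-Reporting-Register-Trigger: {
          ...,
          "aggregation_coordinator_origin": "https://publickeyservice.msmt.gcp.privacysandboxservices.com"
        }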

Private Aggregation API

For the Private Aggregation API, you must define the `aggregationCoordinatorOrigin` as shown in the Aggregation coordinator choice section of the Private Aggregation API explainer. Specify https://publickeyservice.msmt.gcp.privacysandboxservices.com as the aggregationCoordinatorOrigin.

For example:

                sharedStorage.run('someOperation', {
                  privateAggregationConfig: {
                    aggregationCoordinatorOrigin: 'https://publickeyservice.msmt.gcp.privacysandboxservices.com'
                  }
                });

Error DECRYPTION_KEY_FETCH_ERROR
Example
                "result_info": {
                        "return_code": "REPORTS_WITH_ERRORS_EXCEEDED_THRESHOLD",
                        "return_message": "Aggregation job failed early because the number of reports excluded from aggregation exceeded threshold.",
                        "error_summary": {
                            "error_counts": [
                                {
                                    "category": "DECRYPTION_KEY_FETCH_ERROR",
                                    "count": 1,
                                    "description": "Fetching the decryption key for report decryption failed. This can happen using an unapproved aggregation service binary, running the aggregation service binary in debug mode, key corruption or service availability issues."
                                },
                                {
                                    "category": "NUM_REPORTS_WITH_ERRORS",
                                    "count": 1,
                                    "description": "Total number of reports that had an error. These reports were not considered in aggregation. See additional error messages for details on specific reasons."
                                }
                            ]
                        }
                }
            
Check If the cause is an unapproved binary or running in debug mode, using the correct binary fixes the issue. Follow the instructions here to use the prebuilt AMI or build your own AMI.

Complete the following steps to verify:

  1. You can use the aggregatable_report_converter tool to convert the aggregatable reports that you collected from the .well-known endpoint to AVRO, and to create the output domain keys. (Note: Output domain bucket values should be 16-byte big-endian bytestrings.) Example invocations appear after these steps.

  2. Follow the steps in the codelab for your public cloud provider to collect your debug reports and run an Aggregation Service job using your output domain keys:
    • Google Cloud: follow steps 3.1.2 to 3.2.3 of the Aggregation Service Google Cloud Codelab.
    • Amazon Web Services: follow steps 4.2 to 5.3 of the Aggregation Service AWS Codelab.

If this returns a SUCCESS response, your conversion is working.
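
For reference, the converter is typically invoked along these lines; the flag names follow the codelab and may change between releases, so verify them against the tool's help output:

        # Convert a collected debug report (JSON) into an aggregatable AVRO report.
        java -jar aggregatable_report_converter.jar --request_type convertDebugBatch \
          --input_file single_debug_report.json --debug

        # Create an output domain AVRO file for a given bucket key.
        java -jar aggregatable_report_converter.jar --request_type createDomainAvro \
          --bucket_key <bucket key>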

Are your aggregatable reports intact?

Verify that your aggregate report, output domain keys, and shared info are intact. Refer to the sample code for converting aggregatable reports and creating domain files if you would like more information.

API errors you may see for this issue include the following:

Error INPUT_DATA_READ_FAILED
Endpoint createJob
Check Are the input_data_bucket_name, input_data_blob_prefix, output_data_bucket_name, and output_data_blob_prefix fields in the createJob request correct (see the example request below)? Does the input report data location contain the reports to be processed? Do you have permission to read from the storage locations for the reports and output domain?
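
For reference, a trimmed createJob request body with these fields looks roughly like the following; the bucket names, prefixes, and reporting origin are placeholders:

        {
          "job_request_id": "test-job-001",
          "input_data_bucket_name": "<your_data_bucket>",
          "input_data_blob_prefix": "input/reports.avro",
          "output_data_bucket_name": "<your_data_bucket>",
          "output_data_blob_prefix": "output/summary_report.avro",
          "job_parameters": {
            "attribution_report_to": "<your_reporting_origin>",
            "output_domain_bucket_name": "<your_data_bucket>",
            "output_domain_blob_prefix": "domain/output_domain.avro"
          }
        }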

Complete the following steps to verify:

  1. Verify aggregate report:

    • Generate aggregate reports, and use the aggregatable_report_converter tool to create the output domain in AVRO format.
    • Run a createJob request with this aggregatable report and output domain file.
    • If this returns SUCCESS, the aggregatable report is intact. If this returns an error, either your aggregatable report has an issue, or both the report and the domain do.
    • Proceed to check the domain file in the next step.
  2. Verify output domain file:

    • Generate the output domain file, and use the aggregatable_report_converter tool to create the aggregatable report.
    • Run a createJob request with this aggregatable report and output domain file.
    • If this returns SUCCESS, the output domain is intact and the issue is in your code that creates the aggregatable report.
    • Continue to the next step to check the shared_info.
  3. Verify shared info:

    • Ensure you have debug-enabled reports. Debug-enabled reports have a debug_cleartext_payload field.
    • Create a debug report for use with the local testing tool, using debug_cleartext_payload as the payload.
    • Run the local testing tool with your domain file (see the example invocation below). If this returns SUCCESS while the encrypted report fails, your shared_info has likely been tampered with.
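
A sketch of the local testing tool invocation; the jar name and flags follow the aggregation-service repository's local testing documentation, so verify them against the version you downloaded:

        java -jar LocalTestingTool.jar \
          --input_data_avro_file debug_reports.avro \
          --domain_avro_file output_domain.avro \
          --output_directory . \
          --json_output \
          --no_noising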

If you suspect any further errors or tampering, collect the JSON aggregate report, domain key, generated aggregatable AVRO report, and output domain, and continue to the next steps.

Inspect your new deployment version

Verify that your version of Aggregation Service is still supported. Once you have determined which version you are using, check the list of Aggregation Service releases and confirm that your version does not carry the end-of-support warning: "This release has reached its end of support on { date }." The following instructions for determining which version you have deployed cover the supported public clouds.

Steps for Google Cloud

  1. Navigate to Compute Engine > VM instances.
  2. Click into the virtual machine instance with -worker- in the name.
  3. Find the Custom Metadata section and then locate the key tee-image-reference.
  4. The value of tee-image-reference contains the version number. For example, the version number in the following path is v2.9.1. These are prebuilt images that live in a Google Cloud project's Artifact Registry.
    • Note: This applies if you are using the prebuilt assets; if you are not, the value should match the name and tag you gave your own image. (example: us-docker.pkg.dev/<gcp_project_name>/artifacts/aggregation-service-container-artifacts-worker_mp_go_prod:2.9.1)

Steps for Amazon Web Services

  1. Navigate to EC2 Instances in your Amazon Web Services console.
  2. Click the instance with the name aggregation-service-operator-dev-env.
  3. On the instance page, find Details > AMI (Amazon Machine Image).
  4. Your version name should be included in the image path. For example, the version number of the following path is v2.9.1.
    • Note: This applies if you are using the prebuilt assets; if you are not, the value should match the name and tag you gave your own image. (example: aggregation-service-enclave_2.9.1--2024-10-03T01-24-25Z)

Next Steps

If you don't see a resolution to your Aggregation Service issue, notify us by filing a GitHub issue or submitting the technical support form.