Integrations
Red Hat OpenShift Pipelines
Overview
This document provides a detailed guide to integrating Red Hat OpenShift Pipelines with Callgoose SQIBS for real-time Incident Management, Incident Auto Remediation, Event-Driven Automation, and other automation purposes. The integration enables automatic creation, updating, and resolution of incidents in Callgoose SQIBS based on alerts triggered in Red Hat OpenShift Pipelines. The guide includes steps for setting up pipelines, configuring webhook notifications, creating API filters in Callgoose SQIBS, and troubleshooting.
Prerequisites
- Red Hat OpenShift Pipelines Account: Access to OpenShift Pipelines for creating and managing pipelines.
- Callgoose SQIBS Account: With valid privileges to set up API filters and receive notifications.
- Webhook/API Endpoint: Available in Callgoose SQIBS to receive alerts from OpenShift Pipelines.
1. Obtain API Token and Endpoint Details
To integrate with Callgoose SQIBS, you first need to obtain an API token and find the API endpoint details.
- Generate an API Token:
- Follow the guide on How to Create API Token in Callgoose SQIBS.
- Find the API Endpoint:
- Refer to the Callgoose SQIBS API Endpoint Documentation to get the endpoint details where the JSON payloads from Red Hat OpenShift Pipelines will be sent.
2. Debugging and Troubleshooting
You can enable debugging in the API tokens used with OpenShift Pipelines notifications for troubleshooting purposes.
- Enable Debugging:
- You can update the debug value when adding or updating an API token.
- When debugging is enabled, logs are stored in the API log section for your review. Debugging automatically turns off after 48 hours.
- When debugging is turned off, no logs are saved in the API log.
- Using API Log for Troubleshooting:
- The API log provides detailed information on all API calls made to Callgoose SQIBS.
- You can check the JSON values in each API log entry for troubleshooting purposes.
- Use the information in the API log to create or refine API filters to ensure incidents are created correctly based on the API payloads received.
- Callgoose SQIBS creates incidents according to your API filter configuration, giving you full control over how alerts from different services trigger incidents and alerts for your support team or automation processes.
3. Configuring Red Hat OpenShift Pipelines to Send JSON Payloads
To configure OpenShift Pipelines to generate JSON payloads like the examples provided, follow the steps below. They cover setting up the necessary pipelines and webhook notifications so that the payloads match what Callgoose SQIBS expects.
3.1 Setting Up Pipelines in OpenShift Pipelines
To generate the required JSON payloads, you first need to set up pipelines within OpenShift Pipelines.
- Log in to the OpenShift Console:
- Access the Red Hat OpenShift platform using your account credentials.
- Navigate to the Pipelines Section:
- In the OpenShift console, go to the Pipelines section under the Developer view.
- Create a New Pipeline:
- Click on Create Pipeline to start building a new pipeline.
- Specify the tasks and steps within the pipeline, defining triggers for each step.
- Configure the Notification Method:
- Set up a Task that sends a webhook notification to Callgoose SQIBS when the pipeline status changes (e.g., success, failure).
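The notification Task above can be sketched in Tekton YAML as a step that posts the payload with curl. This is a minimal illustration, not an official Red Hat or Callgoose example: the Task name, parameter names, and container image are assumptions, and the webhook URL must be replaced with your actual Callgoose SQIBS endpoint.

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: notify-callgoose            # hypothetical name
spec:
  params:
    - name: PIPELINE_ID
    - name: STATUS
    - name: TASK_NAME
    - name: DESCRIPTION
    - name: WEBHOOK_URL             # your Callgoose SQIBS API endpoint
  steps:
    - name: send-webhook
      image: registry.access.redhat.com/ubi9/ubi-minimal   # any image that provides curl
      script: |
        #!/bin/sh
        # Build the timestamp at run time; backticks avoid clashing with
        # Tekton's $(...) variable substitution syntax.
        TS=`date -u +%Y-%m-%dT%H:%M:%S.000Z`
        curl -sS -X POST -H "Content-Type: application/json" \
          -d "{\"pipeline\": {\"id\": \"$(params.PIPELINE_ID)\", \"status\": \"$(params.STATUS)\", \"task\": \"$(params.TASK_NAME)\", \"timestamp\": \"$TS\", \"description\": \"$(params.DESCRIPTION)\"}}" \
          "$(params.WEBHOOK_URL)"
```

A common pattern is to invoke such a Task from a Pipeline's finally section, passing Tekton's aggregate $(tasks.status) variable as the STATUS parameter so the webhook fires on both success and failure.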
3.2 Configuring the Webhook Notification
To ensure that the JSON payload sent matches the examples provided, follow these steps when configuring the webhook:
- Add Webhook URL:
- In the Webhook URL field, enter the endpoint provided by Callgoose SQIBS.
- Ensure the protocol is HTTPS for secure data transmission.
- Customize Payload Format:
- Ensure that the payload includes key fields like "status", "pipeline", "task", "timestamp", and others as shown in the example payloads.
- Example Payload Setup:
```json
{
  "pipeline": {
    "id": "$PIPELINE_ID",
    "status": "$STATUS",
    "task": "$TASK_NAME",
    "timestamp": "$TIMESTAMP",
    "description": "$DESCRIPTION"
  }
}
```
- Placeholder Explanation:
- "$STATUS": Replaces with the status of the pipeline (e.g., Succeeded, Failed).
- "$PIPELINE_ID": A unique identifier for the pipeline.
- "$TASK_NAME": The specific task within the pipeline.
- "$DESCRIPTION": A descriptive message of the pipeline's progress.
- "$TIMESTAMP": The time the pipeline task was executed.
- Test the Webhook Configuration:
- Before activating the webhook, perform a test to ensure that the JSON payload is correctly formatted and is being sent to the Callgoose SQIBS API endpoint as expected.
- Review the payload in Callgoose SQIBS to confirm that it matches the expected structure.
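As a quick local check before wiring the webhook into a pipeline, you can build the payload in a shell and validate its shape. The endpoint URL below is a placeholder (use the one from the Callgoose SQIBS API Endpoint Documentation), and the final curl line is commented out so the snippet only validates the JSON:

```shell
#!/bin/sh
# Placeholder - replace with your real Callgoose SQIBS API endpoint
CALLGOOSE_ENDPOINT="https://example.invalid/callgoose/endpoint"

PIPELINE_ID="pipeline123"
STATUS="Failed"
TASK_NAME="Build"
DESCRIPTION="Build failed due to missing dependencies."
TIMESTAMP=`date -u +%Y-%m-%dT%H:%M:%S.000Z`

PAYLOAD="{\"pipeline\": {\"id\": \"$PIPELINE_ID\", \"status\": \"$STATUS\", \"task\": \"$TASK_NAME\", \"timestamp\": \"$TIMESTAMP\", \"description\": \"$DESCRIPTION\"}}"

# Confirm the payload is well-formed JSON before sending it anywhere
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Once validated, send the test event:
# curl -sS -X POST -H "Content-Type: application/json" -d "$PAYLOAD" "$CALLGOOSE_ENDPOINT"
```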
3.3 Finalizing and Testing
- Save and Activate the Pipeline:
- Once the pipeline and webhook are correctly configured, save the configuration and activate it.
- Validate the Integration:
- Trigger the pipeline manually if possible to verify that the correct JSON payload is sent to Callgoose SQIBS.
- Resolve any errors or issues with the pipeline to ensure the resolved state payload is also correctly sent and processed.
3.4 Additional Considerations
- Permissions: Ensure that the webhook has the necessary permissions to send alerts to the Callgoose SQIBS API endpoint.
- Security: Implement security measures such as HTTPS and API tokens to protect the data being transmitted between OpenShift Pipelines and Callgoose SQIBS.
- Logging and Debugging: Use the debugging and logging features in Callgoose SQIBS to monitor incoming payloads and troubleshoot any issues with the integration.
4. Configuring Callgoose SQIBS
4.1 Create API Filters in Callgoose SQIBS
To correctly map incidents from the OpenShift Pipelines alerts, you need to create API filters based on the JSON payloads received.
4.1.1 Example JSON Payloads from OpenShift Pipelines
- Pipeline Triggered (status: "Failed")
```json
{
  "pipeline": {
    "id": "pipeline123",
    "status": "Failed",
    "task": "Build",
    "description": "Build failed due to missing dependencies.",
    "timestamp": "2024-08-05T12:00:00.000Z"
  }
}
```
- Pipeline Resolved (status: "Succeeded")
```json
{
  "pipeline": {
    "id": "pipeline123",
    "status": "Succeeded",
    "task": "Deploy",
    "description": "Pipeline completed successfully.",
    "timestamp": "2024-08-05T12:30:00.000Z"
  }
}
```
4.2 Configuring API Filters
4.2.1 Integration Templates
If you see an OpenShift Pipelines integration template in the "Select Integration Template" dropdown in the API filter settings, you can use it to automatically add the necessary Trigger and Resolve filters along with other values. The values added by the template can be modified to customize the integration according to your requirements.
4.2.2 Manually Add/Edit the Filter
- Trigger Filter (For Creating Incidents):
- Payload JSON Key: "status"
- Key Value Contains: [Failed, Error]
- Map Incident With: "pipeline.id"
- This corresponds to the unique "pipeline.id" from the OpenShift Pipelines payload.
- Incident Title From: "pipeline.task"
- This will use the pipeline task name as the incident title in Callgoose SQIBS.
- Incident Description From: Leave this empty unless you want to use a specific key-value from the JSON payload. If a key is entered, only the value for that key will be used as the Incident Description instead of the full JSON. By default, the Incident Description will include the full JSON values.
- Example: If you use the "description" key in the Incident Description From field, the incident description will be the value of the "description" key. In the example JSON payload provided earlier, this would result in a description like "Build failed due to missing dependencies.".
- Resolve Filter (For Resolving Incidents):
- Payload JSON Key: "status"
- Key Value Contains: [Succeeded]
- Incident Mapped With: "pipeline.id"
- This ensures the incident mapped to the specific "pipeline.id" is resolved when the pipeline reports a "Succeeded" status.
Refer to the API Filter Instructions and FAQ for more details.
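To illustrate how the Trigger and Resolve filters relate (this models the mapping logic only; it is not Callgoose SQIBS code), note that both example payloads carry the same "pipeline.id", which is what lets the Resolve filter close the incident opened by the Trigger filter:

```shell
#!/bin/sh
# The two example payloads from section 4.1.1, abbreviated
TRIGGER='{"pipeline": {"id": "pipeline123", "status": "Failed", "task": "Build"}}'
RESOLVE='{"pipeline": {"id": "pipeline123", "status": "Succeeded", "task": "Deploy"}}'

# Extract pipeline.id from each payload
trigger_id=$(echo "$TRIGGER" | python3 -c 'import sys,json; print(json.load(sys.stdin)["pipeline"]["id"])')
resolve_id=$(echo "$RESOLVE" | python3 -c 'import sys,json; print(json.load(sys.stdin)["pipeline"]["id"])')

# The Resolve filter can only close the incident if the ids match
[ "$trigger_id" = "$resolve_id" ] && echo "incident mapped and resolved via pipeline.id=$trigger_id"
```

If a pipeline run used a different id in its "Succeeded" payload, the Resolve filter would find no matching open incident, which is a common cause of incidents that never auto-resolve.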
4.3 Finalizing Setup
- Save the API Filters:
- Ensure that the filters are correctly configured and saved in Callgoose SQIBS.
- Double-check that all key mappings, incident titles, and descriptions are correctly aligned with the payload structure sent by OpenShift Pipelines.
- Test the Integration:
- Manually trigger a pipeline in Red Hat OpenShift Pipelines to test if incidents are created in Callgoose SQIBS.
- Verify that the incident appears in Callgoose SQIBS with the correct title, description, and mapped values.
- Resolve the pipeline task in OpenShift Pipelines and ensure that the corresponding incident in Callgoose SQIBS is marked as resolved.
- Review and Adjust:
- If the incidents are not created or resolved as expected, review the API logs in Callgoose SQIBS and adjust the API filters accordingly.
- Use the debugging features in Callgoose SQIBS to monitor the incoming payloads and troubleshoot any issues.
5. Testing and Validation
5.1 Triggering Pipelines
- Simulate a Pipeline Failure:
- Intentionally cause a pipeline failure in OpenShift Pipelines to verify that an incident is created in Callgoose SQIBS with the correct information.
- Check the incident details in Callgoose SQIBS to ensure that the correct pipeline task and status are reflected.
5.2 Resolving Pipelines
- Acknowledge and Resolve the Pipeline Task:
- Once the pipeline task is resolved in OpenShift Pipelines, verify that the incident in Callgoose SQIBS is automatically marked as resolved.
6. Security Considerations
- API Security: Ensure that the Callgoose SQIBS API endpoint is correctly configured and that the API token is securely stored and used.
- OpenShift Pipelines Permissions: Confirm that the webhook in OpenShift Pipelines has appropriate permissions to send alerts and data to Callgoose SQIBS.
- Data Encryption: Ensure that the transmission of data between OpenShift Pipelines and Callgoose SQIBS is encrypted, especially if sensitive information is involved.
7. Troubleshooting
- No Incident Created: If no incident is created, verify that the webhook URL in OpenShift Pipelines is correct and that the JSON payload structure matches the API filters configured in Callgoose SQIBS.
- Incident Not Resolved: Ensure that the resolve filter in Callgoose SQIBS is correctly configured and that the JSON payload sent by OpenShift Pipelines matches the expected structure.
8. Conclusion
This guide provides a comprehensive overview of how to integrate Red Hat OpenShift Pipelines with Callgoose SQIBS for effective incident management. By following the steps outlined, you can ensure that alerts from OpenShift Pipelines are automatically reflected as incidents in Callgoose SQIBS, with proper resolution tracking when the issues are resolved.
For further customization or advanced use cases, refer to the official documentation for both Red Hat OpenShift Pipelines and Callgoose SQIBS:
- Red Hat OpenShift Pipelines Documentation
- Callgoose SQIBS API Token Documentation
- Callgoose SQIBS API Endpoint Documentation
- API Filter Instructions and FAQ
- How to Send API
This documentation will guide you through the integration process, ensuring that your incidents are managed effectively within Callgoose SQIBS based on real-time alerts from Red Hat OpenShift Pipelines.