Cascadeo AI v3 Release Notes – January 2024

The Year of Cascadeo’s Shift Towards AI and Beyond

In 2023, the platform team aggressively pushed towards Generative AI innovations. We’re now welcoming the rest of the company to learn about and transition to the world of AI and beyond.

Cascadeo AI x OpenAI Integration (Early Days)

Direct API call: What started as a quick POC back in April 2023 has now progressed to multiple experiments with Generative AI; a rough sketch of the call itself appears after the example exchange below.

Prompt: You are a DevOps analyst and I just received this alert from my monitoring solution about my AWS environment. Please interpret this data and provide possible next steps – for instance, how to evaluate the criticality of this event and how to validate the event. Please also include 5 possible remediation steps. Here is the alert:

OpenAI Logo

“Based on the alert notes, this event seems to be a warning threshold, which has been set to >5% with the expectation that it will fire once per day. However, if the CPU utilization is >85%, it should be taken more seriously. In this case, the CPU utilization has exceeded the threshold of 90%, indicating that the event is critical and requires immediate attention.

To validate the event, you can check the CPU utilization metric of the affected instance using the AWS CloudWatch console or the AWS CLI. Additionally, you can review any recent changes made to the instance, such as an increase in traffic or a software update that may be causing the CPU spike.
Here are five possible remediation steps you can take to address this issue:

  1. Identify the root cause of the high CPU utilization and take appropriate action, such as optimizing queries or tuning application settings.
  2. Increase the instance size or add more instances to handle the increased load.
  3. Scale out your application by using a load balancer and multiple instances to distribute the traffic and reduce the load on each instance.
  4. Review the resource allocation of the affected instance and ensure that it has enough CPU and memory resources to handle the current workload.
  5. Implement an autoscaling policy that automatically adjusts the number of instances based on CPU utilization, ensuring that your environment can handle spikes in traffic or usage.”
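
For reference, the direct API call behind this kind of experiment can be sketched as follows. This assumes the openai Python SDK (v1.x) with an API key in the environment; the model name and the alert payload are illustrative placeholders, not the exact values from the POC.

```python
# Rough sketch of a direct API call, assuming the openai Python SDK (v1.x)
# and an OPENAI_API_KEY in the environment. Model name and alert payload
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

alert = {
    "metric": "CPUUtilization",
    "instance_id": "i-0123456789abcdef0",  # hypothetical instance
    "value": 92.4,
    "warning_threshold": 85,
}

prompt = (
    "You are a DevOps analyst and I just received this alert from my monitoring "
    "solution about my AWS environment. Please interpret this data and provide "
    "possible next steps. Please also include 5 possible remediation steps. "
    f"Here is the alert: {alert}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```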

“Garbage In, Garbage Out”

As we pursued more of the AI space beyond anomaly detection in our monitoring services, we were also mindful of the age-old concept of “Garbage In, Garbage Out”. Before progressing, the platform team, together with the professional services team, had to prepare and better understand all the data flowing through the platform.

Note that additional AWS services were used to build out the data lake with AWS Lake Formation.

AWS Lake Formation Logo
AWS Glue Logo
Amazon Athena Logo
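
To illustrate the kind of plumbing involved, a query over the curated alert data can be run through Amazon Athena roughly as in the sketch below; the Glue database, table, and S3 output location are hypothetical placeholders rather than our actual lake naming.

```python
# Sketch of querying the curated alert data through Amazon Athena with boto3.
# The Glue database, table, and S3 output location are hypothetical placeholders.
import boto3

athena = boto3.client("athena")

query = """
    SELECT account_id, alarm_name, metric_name, COUNT(*) AS alert_count
    FROM monitoring_alerts                         -- hypothetical Glue table
    WHERE event_time >= date_add('day', -30, current_date)
    GROUP BY account_id, alarm_name, metric_name
    ORDER BY alert_count DESC
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "cascadeo_ai_lake"},        # hypothetical
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Started Athena query:", execution["QueryExecutionId"])
```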

Transition to Amazon Bedrock

With the help of our partners at AWS, we received early access to Amazon Bedrock just as we were completing our Lake Formation experiments and implementation.

Amazon Bedrock Logo
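
A minimal sketch of what a Bedrock invocation looks like with boto3 is shown below. It follows the Anthropic Claude v2 text-completion format that Bedrock exposed at the time; the region, prompt, and inference parameters are assumptions for illustration.

```python
# Minimal sketch of invoking Claude v2 through Amazon Bedrock with boto3.
# Region, prompt, and inference parameters are illustrative assumptions.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize this CloudWatch alert for an operations engineer: "
              "CPUUtilization above 90% on instance i-0123456789abcdef0.\n\nAssistant:",
    "max_tokens_to_sample": 500,
    "temperature": 0.2,
})

response = bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
print(json.loads(response["body"].read())["completion"])
```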

Initiating Customer-Centric Monthly Summaries 

Our recent strides in harnessing generative AI have led us to lay the groundwork for automatically generated month-end summary reports. These reports give our customers a comprehensive overview of their activities, using the summarization capabilities of generative AI. The goal is to elevate customer engagement, enable proactive communication on monthly activities, and foster a deeper understanding of their cloud operations.

Prompt: “Act as a managed service support engineer monitoring a customer’s cloud services.

Given a set of alerts we received, please send a 2 paragraph summary explaining to a client stakeholder the alerts that transpired. Also recommend steps to mitigate, or lessen these alerts.

Please specify the given timeframe included in the alerts received. The report should be in third person and professional.

Start with the phrase: “Based on the events we’ve received for integration from…”

End the chain when a summary report has been generated. Final answer should always be the complete summary report.

Do not reply to refer to summary report. Do not include any references to context.”

Report output based on the prompt above.
Amazon Timestream Logo
LangChain Logo
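
Because Amazon Timestream holds the alert history that feeds these reports, the data-pull step ahead of the summarization prompt might look roughly like the following sketch; the database and table names are hypothetical placeholders, not our production schema.

```python
# Sketch of pulling a month of alert events from Amazon Timestream before
# handing them to the summarization prompt above. Database and table names
# are hypothetical placeholders.
import boto3

timestream = boto3.client("timestream-query")

query = """
    SELECT time, measure_name, measure_value::varchar AS detail
    FROM "cascadeo_ai"."alert_events"              -- hypothetical database.table
    WHERE time BETWEEN ago(30d) AND now()
    ORDER BY time DESC
"""

result = timestream.query(QueryString=query)
alert_rows = [row["Data"] for row in result["Rows"]]
print(f"Fetched {len(alert_rows)} alert rows for the monthly summary prompt")
```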

Coming Full Circle

With AI, we were not only operationalizing the process; we also wanted a way to remediate, improve, and consistently grow our operations and, true to the world of AI, let it grow and learn within its own capacity. For our latest experiment, we wanted our AI to learn, internalize, and think for itself. (Remediation is still performed through human intervention.)

Prompt: If you (Claude) were the customer, how would you rate your last evaluation response, and how would you improve this for future iteration?

Note that this feature is still in its early days and has not yet been fully utilized by the platform team.

Learning Resource: RAG through LangChain

Response:

Thank you for the feedback. If I were the customer receiving this email summary and recommendations, here is how I would rate and improve it:

Rating:

  • Completeness – 4/5. The email provided a good high-level overview of the different services and resources in my AWS environment. The inclusion of specific metrics like instance counts, bucket counts etc. is useful.
  • Usefulness – 3/5. The recommendations provided are reasonable from a security and cost optimization standpoint. However, they lack details on how specifically to implement them in my environment.
  • Tone – 5/5. The email has a polite and professional tone appropriate for a customer-facing message.

Improvements:

  • Provide more specific and actionable steps on how to implement the recommendations. For example, instead of just saying “Enable S3 bucket encryption”, give instructions or point to AWS docs on the exact steps to enable default encryption on a bucket.
  • Prioritize the top 3-5 most important recommendations that would provide the biggest security/cost benefits if implemented. Too many recommendations may be overwhelming.
  • Offer to schedule a follow-up discussion to walk through the recommendations in more detail and answer any questions.
  • Provide links to AWS services, docs or blogs for more info on the recommendations.
  • Offer an executive summary slide deck as an optional visual summary to accompany the email.
  • Ask the customer for feedback on the usefulness of the recommendations and if any key areas were missed. Use that to improve future iterations.

In summary, the email provides a solid starting point but could be improved by having more actionable and detailed recommendations, focusing on priority areas, and following up with the customer to refine it based on their feedback. The goal should be an email that makes it as easy as possible for the customer to understand and implement your recommendations.
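
In the spirit of the “RAG through LangChain” learning resource noted above, a minimal retrieval-augmented sketch is shown below. The module paths follow the LangChain releases current at the time, and the model IDs, FAISS store, and sample documents are illustrative assumptions rather than our production pipeline.

```python
# Minimal RAG sketch in the spirit of the "RAG through LangChain" resource.
# Module paths follow the LangChain releases current at the time; the model
# ids, FAISS store, and sample documents are illustrative assumptions.
from langchain.embeddings import BedrockEmbeddings
from langchain.llms import Bedrock
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Alert history and runbook snippets the model can draw on
documents = [
    "2024-01-03: CPUUtilization exceeded 90% on the reporting EC2 instance.",
    "Runbook: for sustained CPU pressure, review autoscaling policies first.",
]

# Embed the snippets and keep them in an in-memory FAISS index
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")
index = FAISS.from_texts(documents, embeddings)

# Claude on Bedrock answers grounded in whatever the retriever pulls back
llm = Bedrock(model_id="anthropic.claude-v2")
qa = RetrievalQA.from_chain_type(llm=llm, retriever=index.as_retriever())

print(qa.run("How would you rate and improve last month's evaluation response?"))
```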

As we continue to integrate AI-generated content, we prioritize customer awareness by clearly labeling sections created by AI. This transparency aims to foster understanding and trust wherever AI contributes to content generation.

Screenshot of a notice that reads: This feature is in beta release, and this message has been generated by Claude via AWS Bedrock. Please review your infrastructure with your administrator before committing to any infrastructure changes, or contact us at support@cascadeo.com for any concerns or clarifications. Thank you!