Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct those hallucinations, and the potential for AI-powered tools to generate QA tests.