Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct those hallucinations, and the potential for AI-powered tools to generate QA tests.