Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct those hallucinations, and the potential for AI-powered tools to generate QA tests.