
Report: 37% of ML leaders say they don’t have the data needed to improve model performance


A new report by Scale AI explores what’s working and what isn’t in AI implementations, along with best practices for ML teams moving from testing-only to real-world deployment. The report examines every stage of the ML life cycle – from data collection and annotation to model development, deployment and monitoring – to understand where AI innovation is bottlenecked, where problems occur, and which approaches are helping companies succeed.

The goal of the report is to further demystify what it takes to harness the full potential of AI for every business, and to help organizations and ML practitioners remove their current barriers, learn and implement best practices, and ultimately use AI to their strategic advantage.

For ML teams, data quality is one of the most important factors in their success, and according to respondents it is also the most difficult challenge to overcome. More than a third (37%) of all respondents said they did not have the types of data needed to improve model performance. Quantity is not the only issue – only 9% of respondents said their training data is free of noise, bias, and gaps.

The majority of respondents had problems with their training data. The top three problems were data noise (67%), data bias (47%), and domain gaps (47%).

Most teams, regardless of industry or level of AI maturity, face similar challenges with data quality and variety. Scale’s data suggests that working closely with annotation partners can help ML teams overcome annotation-quality and data-management challenges, speeding up model deployment. ML teams that did not engage with annotation partners at all were more likely to take longer than three months to obtain annotated data.

The survey was conducted online in the US by Scale AI from March 31, 2022 to April 12, 2022. More than 1,300 ML practitioners, including respondents from Meta, Amazon, Spotify and others, took part.

Read the full report from Scale AI.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.


