
Researchers Reduce Bias in AI Models While Maintaining or Improving Accuracy

Machine-learning models can fail when they try to make predictions for individuals who were underrepresented in the datasets they were trained on.

For example, a model that predicts the best treatment option for someone with a chronic disease might be trained on a dataset that contains mostly male patients. That model may make incorrect predictions for female patients when deployed in a hospital.

To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is promising, it often requires removing a large amount of data, hurting the model's overall performance.
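As a point of comparison, the balancing baseline described above can be sketched in a few lines. This is a generic illustration, not the researchers' code; the function and variable names are placeholders.

```python
# Hypothetical sketch: balance a dataset by downsampling every subgroup
# to the size of the smallest one. This is the baseline the article
# contrasts with, which can discard a lot of data.
import numpy as np

def balance_by_downsampling(features, labels, groups, seed=0):
    """Keep an equal number of examples from each subgroup."""
    rng = np.random.default_rng(seed)
    group_ids = np.unique(groups)
    smallest = min(int(np.sum(groups == g)) for g in group_ids)
    keep = []
    for g in group_ids:
        idx = np.flatnonzero(groups == g)
        # Randomly drop examples from the larger subgroups.
        keep.extend(rng.choice(idx, size=smallest, replace=False))
    keep = np.sort(np.asarray(keep))
    return features[keep], labels[keep], groups[keep]
```

If one subgroup has 7 examples and another has 3, the balanced dataset keeps only 3 of each, discarding more than half of the larger group.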

MIT researchers developed a new technique that identifies and removes the specific points in a training dataset that contribute most to a model's failures on minority subgroups. By removing far fewer datapoints than other approaches, this technique maintains the overall accuracy of the model while improving its performance on underrepresented groups.

In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more prevalent than labeled data for many applications.

This method could also be combined with other approaches to improve the fairness of machine-learning models deployed in high-stakes situations. For example, it might someday help ensure underrepresented patients aren't misdiagnosed due to a biased AI model.

"Many other algorithms that try to address this issue assume each datapoint matters as much as every other datapoint. In this paper, we are showing that assumption is not true. There are specific points in our dataset that are contributing to this bias, and we can find those data points, remove them, and get better performance," says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and co-lead author of a paper on this technique.

She wrote the paper with co-lead authors Saachi Jain PhD '24 and fellow EECS graduate student Kristian Georgiev; Andrew Ilyas MEng '18, PhD '23, a Stein Fellow at Stanford University; and senior authors Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems, and Aleksander Madry, the Cadence Design Systems Professor at MIT. The research will be presented at the Conference on Neural Information Processing Systems.

Removing bad examples

Often, machine-learning models are trained using huge datasets gathered from many sources across the internet. These datasets are far too large to be carefully curated by hand, so they may contain bad examples that hurt model performance.

Researchers also know that some data points impact a model's performance on certain downstream tasks more than others.

The MIT researchers combined these two ideas into an approach that identifies and removes these problematic datapoints. They seek to solve a problem known as worst-group error, which occurs when a model underperforms on minority subgroups in a training dataset.
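The quantity being optimized, worst-group error, is the highest error rate taken over the subgroups of an evaluation set. A minimal sketch, with illustrative variable names:

```python
# Worst-group error: compute the error rate separately for each
# subgroup and report the worst (highest) one.
import numpy as np

def worst_group_error(y_true, y_pred, groups):
    """Return the maximum per-subgroup error rate."""
    errors = []
    for g in np.unique(groups):
        mask = groups == g
        errors.append(float(np.mean(y_true[mask] != y_pred[mask])))
    return max(errors)
```

A model can have low average error overall yet a high worst-group error if one subgroup is consistently misclassified, which is exactly the failure mode the article describes.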

The researchers' new technique is driven by prior work in which they introduced a method, called TRAK, that identifies the most important training examples for a specific model output.

For this new technique, they take incorrect predictions the model made about minority subgroups and use TRAK to identify which training examples contributed the most to each incorrect prediction.

"By aggregating this information across bad test predictions in the right way, we are able to find the specific parts of the training that are driving worst-group accuracy down overall," Ilyas explains.

Then they remove those specific samples and retrain the model on the remaining data.

Since having more data usually yields better overall performance, removing only the samples that drive worst-group failures maintains the model's overall accuracy while boosting its performance on minority subgroups.
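The aggregate-remove-retrain loop described above can be sketched as follows. This is a hedged illustration, not the paper's implementation: `attribution_scores` stands in for TRAK-style attributions (entry `[i, j]` measuring how much training example `i` contributed to the model's output on failing test example `j`), and `fit_fn` is a placeholder for whatever training routine the practitioner uses.

```python
# Sketch of the pipeline: aggregate per-failure attribution scores,
# flag the training points that contribute most to the failures,
# then retrain on the rest. All names here are illustrative.
import numpy as np

def select_harmful_examples(attribution_scores, num_to_remove):
    """Sum attributions across the failing test examples and return
    the indices of the training points with the largest totals."""
    total_contribution = attribution_scores.sum(axis=1)
    # Largest aggregate contribution to the failures first.
    return np.argsort(total_contribution)[::-1][:num_to_remove]

def remove_and_retrain(train_X, train_y, harmful_idx, fit_fn):
    """Drop the flagged examples and retrain via the caller's fit_fn."""
    keep = np.setdiff1d(np.arange(len(train_y)), harmful_idx)
    return fit_fn(train_X[keep], train_y[keep])
```

Because only the flagged points are dropped, the retraining set stays much larger than under full subgroup balancing, which is the source of the accuracy advantage the article reports.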

A more accessible approach

Across three machine-learning datasets, their method outperformed multiple techniques. In one instance, it improved worst-group accuracy while removing about 20,000 fewer training samples than a conventional data balancing method. Their technique also achieved higher accuracy than methods that require making changes to the inner workings of a model.

Because the MIT method involves changing a dataset instead, it would be easier for a practitioner to use and can be applied to many types of models.

It can also be utilized when bias is unknown because subgroups in a training dataset are not labeled. By identifying the datapoints that contribute most to a feature the model is learning, they can understand the variables it is using to make a prediction.

"This is a tool anyone can use when they are training a machine-learning model. They can look at those datapoints and see whether they are aligned with the capability they are trying to teach the model," says Hamidieh.

Using the technique to detect unknown subgroup bias would require intuition about which groups to look for, so the researchers hope to validate it and explore it more fully through future human studies.

They also want to improve the performance and reliability of their technique and ensure the method is accessible and easy to use for practitioners who could someday deploy it in real-world environments.

"When you have tools that let you critically look at the data and figure out which datapoints are going to lead to bias or other undesirable behavior, it gives you a first step toward building models that are going to be more fair and more reliable," Ilyas says.

This work is funded, in part, by the National Science Foundation and the U.S. Defense Advanced Research Projects Agency.
