New Open Source Library For Measuring Model Uncertainty From Amazon

Amazon Fortuna supports a variety of Bayesian inference techniques that can be used to train deep neural networks, starting from Flax models.
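To give a feel for this workflow, the sketch below loosely follows the quickstart examples in Fortuna's README at the time of the announcement. Names such as ProbClassifier, DataLoader.from_array_data, and the predictive attribute are taken from those examples and may have changed in later releases, so treat this as an illustrative sketch rather than the definitive API; the MLP and synthetic data are hypothetical stand-ins.

```python
import numpy as np
import flax.linen as nn

from fortuna.data import DataLoader            # names per the README; may have changed
from fortuna.prob_model import ProbClassifier

# Synthetic two-class data, only to make the sketch self-contained.
rng = np.random.default_rng(0)
features = rng.normal(size=(256, 4)).astype("float32")
targets = (features.sum(axis=1) > 0).astype("int32")

class MLP(nn.Module):
    """A minimal Flax classifier; any nn.Module could be used here."""
    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(16)(x))
        return nn.Dense(2)(x)

train_loader = DataLoader.from_array_data(
    (features, targets), batch_size=64, shuffle=True)

# Wrap the Flax model in a probabilistic classifier and train it; which
# Bayesian inference method is used is set through the model's configuration.
prob_model = ProbClassifier(model=MLP())
status = prob_model.train(train_data_loader=train_loader)

# Query the posterior predictive, e.g. mean class probabilities.
mean_probs = prob_model.predictive.mean(
    inputs_loader=train_loader.to_inputs_loader())
```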

The library gives practitioners access to cutting-edge uncertainty quantification methods, helping them build robust and trustworthy AI solutions.

AWS has announced the general availability of Fortuna, an open source library for quantifying the uncertainty of ML models. With the calibration techniques Fortuna provides, such as conformal prediction, any trained neural network can produce calibrated uncertainty estimates.

Many methods for estimating or calibrating the uncertainty of predictions have been published, but existing tools and libraries implement only a narrow subset of them. This makes incorporating uncertainty into production systems costly. Fortuna fills the gap by gathering well-known methods behind a standardised, user-friendly interface.

Fortuna offers three usage modes: starting from uncertainty estimates, starting from model outputs, and starting from Flax models. The first is the quickest way to interact with the library and has the fewest compatibility requirements; it provides conformal prediction techniques for both regression and classification.
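To make the idea concrete, here is a minimal sketch of split conformal prediction for regression in plain NumPy, independent of Fortuna's API: a held-out calibration set is used to pick a residual threshold, which then yields prediction intervals at the requested coverage level.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_targets, test_preds, alpha=0.1):
    """Prediction intervals with ~(1 - alpha) marginal coverage.

    cal_preds, cal_targets: model predictions and true targets on a
    held-out calibration set; test_preds: predictions on test inputs.
    """
    n = len(cal_targets)
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(np.asarray(cal_targets) - np.asarray(cal_preds))
    # Finite-sample-corrected quantile of the scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    return test_preds - q, test_preds + q
```

Any point predictor can be wrapped this way: under the usual exchangeability assumption, the coverage guarantee holds regardless of how the model was trained, which is what makes conformal methods attractive as a post-hoc layer.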

The second mode, starting from model outputs, assumes a model has already been trained in some framework and brings its outputs into Fortuna. In this mode, users can calibrate model outputs, estimate uncertainty, compute metrics, and derive conformal sets.
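As one concrete example of calibrating stored outputs (illustrating the general idea rather than Fortuna's own API), temperature scaling rescales a classifier's logits by a single scalar chosen to minimise negative log-likelihood on held-out data:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    """Grid-search the temperature that minimises validation NLL."""
    def nll(t):
        probs = softmax(val_logits, t)
        return -np.mean(np.log(probs[np.arange(len(val_labels)), val_labels] + 1e-12))
    return min(grid, key=nll)

# Usage on hypothetical arrays of logits and integer labels:
# calibrated = softmax(test_logits, fit_temperature(val_logits, val_labels))
```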

Scikit-learn, an open source machine learning library for Python, is another popular option for estimating model uncertainty: it offers cross-validation and bootstrapping tools as well as support for building ensemble models. TensorFlow Probability also provides tools for uncertainty estimation; built on TensorFlow, it supports Monte Carlo methods and Bayesian neural networks. PyMC3 is a separate probabilistic programming toolkit that lets users build Bayesian models through a high-level interface.
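For instance, a bagged ensemble in scikit-learn gives a simple, if crude, uncertainty signal: the spread of the individual members' predictions. This is a generic illustration, not any library's recommended recipe.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

# Fit 50 trees, each on a bootstrap resample of the training data.
ensemble = BaggingRegressor(
    DecisionTreeRegressor(), n_estimators=50, random_state=0).fit(X, y)

# Per-member predictions on a few inputs; their standard deviation is a
# rough measure of the model's uncertainty at each point.
member_preds = np.stack([tree.predict(X[:5]) for tree in ensemble.estimators_])
mean, std = member_preds.mean(axis=0), member_preds.std(axis=0)
```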

Applications that involve critical decisions demand an accurate assessment of predictive uncertainty. Given such an estimate, one can judge how far to trust a model's predictions, defer to human judgement, or decide whether the model can be deployed safely.
