keras-adf: Assumed Density Filtering (ADF) Probabilistic Neural Networks¶
Release v19.1.0 (What's new?).
keras-adf provides implementations of probabilistic
TensorFlow/Keras neural network layers
based on assumed density filtering.
Assumed density filtering (ADF) is a general concept from Bayesian inference. For the feed-forward neural networks considered here,
it provides a way to approximately propagate a random distribution through the network.
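To make this concrete, here is a minimal sketch (plain Python, not part of keras-adf) of the core moment-matching step for a ReLU nonlinearity: given a Gaussian input N(mu, var), the output of max(0, x) is approximated by a Gaussian with the exact first two moments of the true output distribution. The function name `relu_adf` is hypothetical, chosen for illustration.

```python
import math

def relu_adf(mu, var):
    """Propagate a Gaussian N(mu, var) through ReLU via moment matching.

    Returns the mean and variance of max(0, x) for x ~ N(mu, var),
    i.e. the Gaussian matching the first two moments of the true
    (non-Gaussian) output distribution.
    """
    sigma = math.sqrt(var)
    z = mu / sigma
    # Standard normal pdf and cdf evaluated at z.
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    mean_out = mu * cdf + sigma * pdf
    var_out = (mu * mu + var) * cdf + mu * sigma * pdf - mean_out ** 2
    return mean_out, max(var_out, 0.0)

# Example: propagate a standard normal input through ReLU.
m, v = relu_adf(0.0, 1.0)
```

Applying such a moment-matching rule layer by layer is what lets a distribution, rather than a point estimate, flow through the whole network.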
The layers in this package have the same names and arguments as their standard
Keras counterparts. We use Gaussian distributions for our ADF approximations,
described by their means and (co-)variances. So unlike the standard Keras layers,
each keras-adf
layer takes two inputs and produces two outputs: one for the means
and one for the (co-)variances.
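The following is a minimal NumPy sketch (not the keras-adf implementation itself) of how a dense layer can mirror this two-input/two-output convention, assuming independent inputs with a diagonal covariance; the name `dense_adf` is hypothetical.

```python
import numpy as np

def dense_adf(means, variances, W, b):
    """ADF forward pass of a dense (fully connected) layer.

    For a linear map y = x @ W + b with independent Gaussian inputs
    (diagonal covariance), the output moments are exact:
        E[y]   = E[x] @ W + b
        Var[y] = Var[x] @ W**2   (element-wise square of W)
    """
    out_means = means @ W + b
    out_variances = variances @ (W ** 2)
    # Two outputs, mirroring the two-input/two-output layer convention.
    return out_means, out_variances

# Toy example: 2 input units -> 3 output units.
W = np.array([[1.0, 0.5, -1.0],
              [2.0, 0.0, 1.0]])
b = np.array([0.1, 0.2, 0.3])
m_in = np.array([1.0, -1.0])
v_in = np.array([0.25, 0.04])
m_out, v_out = dense_adf(m_in, v_in, W, b)
```

Linear layers propagate Gaussian moments exactly; nonlinearities require the moment-matching approximation mentioned above.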
Getting Started¶
keras-adf
is a Python package hosted on PyPI.
It is intended to be used as part of the TensorFlow/Keras framework.
The recommended installation method is pip-installing
into a virtual environment.
$ pip install keras-adf
The next three steps should bring you up and running in no time:

- The Overview section will show you a simple example of keras-adf in action and introduce you to its core ideas.
- The Examples section will give you a comprehensive tour of keras-adf’s features. After reading, you will know about our advanced features and how to use them.
- The API Reference is a quick way to look up details of all features and their options.
If at any point you get confused by some terminology, please check out our Glossary.
Project Information¶
keras-adf
is released under the MIT license,
its documentation lives at Read the Docs,
the code on GitHub,
and the latest release can be found on PyPI.
It’s tested on Python 2.7 and 3.4+.
If you’d like to contribute to keras-adf
you’re most welcome.
We have written a short guide to help get you started!
Further Reading¶
Additional information on the algorithmic aspects of keras-adf
can be found
in the following works:
Jochen Gast, Stefan Roth, “Lightweight Probabilistic Deep Networks”, 2018
Jan Macdonald, Stephan Wäldchen, Sascha Hauch, Gitta Kutyniok, “A Rate-Distortion Framework for Explaining Neural Network Decisions”, 2019