Artificial intelligence (AI) has made significant advances in the last five years, especially in domains like image recognition, natural language understanding, and board games like Go. In areas like healthcare and finance, AI augments essential human decisions, so it is more important than ever to design reliable and unbiased machine learning models to fuel these AI systems.
Machine learning models
It is essential to understand the complicated machine learning models used in the real world. Without a deep understanding of how they work, models can propagate bias, as has been seen in criminal justice, politics, retail, facial recognition, and language understanding, and that lack of understanding erodes trust. Improving the transparency and interpretability of machine learning models decreases errors, reduces unintended bias, and boosts confidence in the results.
A model’s interpretability depends on knowing how it makes predictions, how those predictions vary with the input features, and when it makes mistakes. Interpretable AI is thus about developing cutting-edge machine learning algorithms and building systems that combine interpretability with performance.
A one-of-a-kind book on Interpretable AI
Until the publication of “Interpretable AI”, there was practically no single resource or book that covered the implementation of these cutting-edge interpretability techniques. Dr. Ajay Thampi, a machine learning engineer who specializes in responsible AI and fairness, has written this one-of-a-kind book.
Interpretable AI teaches you how to recognise patterns in your model’s data and why it produces the results it does. As you read, you’ll learn algorithm-specific techniques, such as interpreting regression and generalised additive models, as well as how to improve training performance. You’ll also learn how to interpret advanced deep learning models with difficult-to-observe internals. The topic of AI transparency is growing quickly, and this book distils cutting-edge research into practical Python methods.
This book will show you how to look inside “black box” models, construct accountable algorithms, and determine what causes skewed results. It also includes techniques for decoding AI models; preventing errors due to bias, data leakage, and concept drift; testing for fairness and mitigating bias; and designing GDPR-compliant AI systems.
Pablo Roccatagliata, Professor at Universidad Torcuato Di Tella, described the book as “a sound introduction for practitioners to the exciting field of interpretable AI.”
Cutting-edge interpretability strategies
With Interpretable AI, you can implement cutting-edge interpretability strategies for complex machine learning models and develop fair and explainable AI systems. Only a few resources and practical guides cover all the important techniques that practitioners need in the real world. This book seeks to fill that gap by first establishing a framework for this popular area of research and then presenting a wide range of interpretability strategies. Throughout, it gives concrete real-world examples that show how to develop complicated models and interpret them using cutting-edge methodologies.
Is this a book that everyone should read?
Data scientists and engineers who want to learn more about how their models work and how to design fair and impartial algorithms will benefit from Interpretable AI. The book should also be valuable for architects and business stakeholders who want to understand the models that underpin AI systems so that they can ensure fairness and safeguard their users and brand.
Before you read this book, what do you need to know?
Python-experienced data scientists and engineers will benefit most from this book. Basic familiarity with Python data science libraries such as NumPy, Pandas, Matplotlib, and Scikit-Learn would be helpful. The book shows how to load and represent data using these libraries, but it does not provide an in-depth treatment of them, as that is outside its scope.
Although this book does not focus on the mathematics behind model interpretability, data scientists and engineers interested in creating machine learning models should have a basic mathematical foundation.
Though not a strict requirement, a basic understanding of machine learning or hands-on experience training machine learning models is a benefit. This book does not go into great detail about machine learning, because several other resources and books do so. Instead, it provides a basic understanding of the machine learning models in use, along with instructions on how to train and evaluate them, and focuses on interpretability theory and approaches to interpreting trained models.
The reader should be familiar with linear algebra, particularly vectors and matrices, as well as operations like dot product, matrix multiplication, transpose, and inversion. The reader must also have a good foundation in probability theory and statistics, specifically on the topics of random variables, basic discrete and continuous probability distributions, conditional probability, and Bayes’ theorem. It is also necessary to have a basic understanding of calculus, including single-variable and multivariate functions, as well as derivatives (gradients) and partial derivatives, among other things.
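As a quick, hedged illustration (not taken from the book), here is how those linear-algebra operations look in NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

dot = x @ x                 # dot product: 1*1 + 2*2 = 5.0
Ax = A @ x                  # matrix-vector multiplication
At = A.T                    # transpose
A_inv = np.linalg.inv(A)    # matrix inversion

# The inverse satisfies A @ A_inv = identity (up to rounding).
print(dot)                                # 5.0
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```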
The fundamentals of interpretability
The book begins with an introduction to the field of interpretable AI. You’ll learn about the many types of AI systems, the necessity of interpretability, white-box and black-box models, and how to design interpretable AI systems.
Moreover, you will learn why white-box models are transparent and black-box models are opaque. You’ll start by learning how to analyse simple white-box models like linear regression and decision trees, before moving on to generalised additive models (GAMs). You’ll also learn about the traits that give GAMs their high predictive potential; because GAMs are also very easy to interpret, you get more value for the money when you use them.
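As a minimal sketch of why white-box models are called transparent (the data and feature names below are invented for illustration, not taken from the book), interpreting a linear regression can be as simple as reading its coefficients:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: the target depends linearly on two features.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))            # columns: "size", "age" (hypothetical)
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5   # known ground-truth relationship

model = LinearRegression().fit(X, y)

# In a white-box model the learned parameters are directly readable:
# each coefficient is the change in prediction per unit change in a feature.
for name, coef in zip(["size", "age"], model.coef_):
    print(f"{name}: {coef:+.2f}")
```

Because the synthetic data is exactly linear, the fitted coefficients recover the ground-truth weights; on real data the same readout gives each feature's estimated effect.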
A step-by-step guide
This book provides a step-by-step guide to building interpretable AI systems. It explains why interpretability is important and builds the groundwork for the rest of the book using concrete examples.
“Concrete examples help the understanding and building of interpretable AI systems”, said Izhar Haq, Director of the School of Professional Accountancy at Long Island University.
The book’s primary focus
This book largely focuses on supervised learning systems, where labelled data is present. You will learn how to apply interpretability techniques to regression and classification problems. The book emphasises the interpretation phase, which spans the understanding and explaining stages, and teaches several interpretability strategies that you can use to answer the crucial how question and to address data leakage, bias, and regulatory noncompliance.
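As one hedged example of a model-agnostic interpretability technique for supervised models (permutation importance, as provided by scikit-learn; the book covers its own selection of methods, and the data below is synthetic), you can measure how much a model's score drops when each feature is shuffled:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic regression data: only the first of three features is informative.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=300)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Shuffling an important feature should hurt the score much more
# than shuffling a noise feature.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean)
```

Here the first feature's importance should dominate the two noise features, matching how the data was generated.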
What tools are included in the book?
This book describes its models and interpretability techniques in Python, since many cutting-edge interpretability approaches are developed and released in Python and it is the language most commonly used by practitioners. The accompanying diagram shows the tools used.
Fair and unbiased models
The book examines the prerequisites for building explainable AI systems as well as how to build fair and unbiased models. In Chapter 8, you’ll learn about various fairness definitions and how to assess whether your model contains biases. You’ll also learn how to use a neutralising approach to prevent bias.
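To make one of those fairness definitions concrete, here is a small hedged sketch of demographic parity, one commonly used definition (the predictions and group labels below are invented; the book's own treatment in Chapter 8 may differ):

```python
import numpy as np

# Hypothetical binary predictions for two demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

# Demographic parity compares positive-prediction rates across groups:
# a large gap suggests the model may be biased against one group.
rate_a = y_pred[group == "a"].mean()   # 3/5 = 0.6
rate_b = y_pred[group == "b"].mean()   # 2/5 = 0.4
disparity = abs(rate_a - rate_b)

print(rate_a, rate_b, disparity)
```

A disparity near zero satisfies demographic parity; the 0.2 gap here would prompt a closer look at the model and its training data.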
By the time you reach the end of this book, you will have various interpretability strategies in your toolkit. Unfortunately, when it comes to model comprehension, there is no silver bullet: no single interpretability technique applies to all situations. As a result, you’ll need to look at the model through many lenses and use a range of interpretability techniques.
Code snippets that can be executed
Discussion forum for Interpretable AI liveBook
Free access to liveBook, Manning’s online reading platform, is included with the purchase of Interpretable AI. Using liveBook’s discussion features, you can add comments to the book as a whole or to specific sections or paragraphs. Making notes, asking and answering technical questions, and receiving help from the author and other users is a breeze. Manning’s forums have a section for discussing Interpretable AI, along with information about the forums and the norms of conduct.