The landscape of machine learning interpretability

By Eva M

“The time is now” – a cheesy AI-generated advertising slogan or the title of a timeless noughties tune?

Both, as it turns out.

When Dixons Carphone sought a new strapline to attract shoppers to its Black Friday sale, the company turned to AI for the winning line “The time is now”. This catchphrase was the brainchild of proprietary natural language generation and deep learning models from Phrasee – an AI copywriting tool for digital marketing campaigns. While it would be easy to roll our eyes at the phrase as a model output, the system lived up to the rather iffy marketing adage: “study the past and avoid thinking that the world is any different”. It found a way to say “BUY NOW” that not only maximised opens, clicks and conversions in the email campaign but also differed from all the candidate slogans written by human copywriters.

While it’s certainly an impressive application, it underscores the ability of machine learning (ML) models to make accurate predictions based on the past. (Depending on the application and our perspective, this can be a great or a terrible thing… I’ll return to this later.) For now, I want to emphasise the capability of ML models to solve problems in multi-dimensional space, detect nonlinear, faint or rare phenomena, and make accurate predictions. These properties have made them increasingly ubiquitous in analytical spheres, helping to support rapid decision-making from the most critical applications (e.g. medical diagnoses or military operations) to the trivial conveniences we’re increasingly accustomed to (cf. your auto-curated Love Island Twitter feed).

What makes these models so effective, however, is precisely what makes them difficult to understand. Popular models such as artificial neural networks use a range of mathematical operations to combine and recombine input variables. For instance, if we look at the architecture of a simple artificial neural network (Fig. 1a), we can see that the original input variables are combined in first one hidden layer and then a second hidden layer before the model finally makes a prediction.

Figure 1. Two hypothetical neural networks that take four inputs (e.g. credit history, income, credit score, savings) and make a prediction (e.g. whether or not to approve a loan). Both networks may have similar predictive accuracy, but differences in their architecture could result in different explanations.
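For readers who like to see this concretely, here is a minimal sketch in Python (with NumPy) of the kind of two-hidden-layer network described above. The layer sizes, weights and applicant inputs are all made up for illustration; a real model would learn its weights from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical applicant: credit history (years), income (£k), credit score, savings (£k)
x = np.array([7.0, 42.0, 680.0, 3.5])

# Random weights stand in for a trained model's parameters
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)   # first hidden layer
W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)   # second hidden layer
w3, b3 = rng.normal(size=3), rng.normal()              # output neuron

h1 = np.tanh(W1 @ x + b1)                 # inputs combined...
h2 = np.tanh(W2 @ h1 + b2)                # ...and recombined
p = 1 / (1 + np.exp(-(w3 @ h2 + b3)))     # sigmoid: probability of approving the loan

print(f"P(approve) = {p:.2f}")
```

Even in this toy version, the prediction is a nested chain of weighted sums and nonlinearities, which is exactly why tracing a decision back to any single input is hard.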

While these weighted, complex combinations of data improve the accuracy of predictions, they also make it difficult to map a decision to a specific input variable. As Hall & Gill (2018) put it: “If you are rejected for a credit card, the lender doesn’t usually say it’s because the arctangent of a weighted, scaled combination of your debt-to-income ratio, your savings account balance, your ZIP code, your propensity to play tennis, your credit history length, and your credit score are equal to 0.57.”
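To make the absurdity tangible, here is a hypothetical version of that “explanation” as code; every input and weight below is invented, and chosen purely so that the arctangent comes out to 0.57.

```python
import math

# Invented, scaled inputs: debt-to-income, savings, ZIP code, tennis, history length, credit score
inputs  = [0.31, 0.52, 0.18, 0.04, 0.44, 0.335]
weights = [0.9, -0.4, 0.2, 0.05, 0.6, 0.8]

score = math.atan(sum(w * x for w, x in zip(weights, inputs)))
print(round(score, 2))  # 0.57 -- an accurate "reason", and a completely useless one
```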

Even if you were in the minority of people who might consider this a reasonable and clear explanation, yet another problem lurks. For the same dataset, there can be multiple good models that produce similarly accurate predictions (Fig. 1). That means that even where we can infer meaning from a machine learning model, the details of the explanation might differ depending on the chosen model.
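This “many good models” problem is easy to demonstrate. In the sketch below (assuming scikit-learn is available), two quite different models reach near-identical accuracy on the same synthetic data, yet can weight the inputs differently:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a loan dataset: four partly redundant features
X, y = make_classification(n_samples=2000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    imp = permutation_importance(model, X_te, y_te, random_state=0)
    print(type(model).__name__,
          f"accuracy={model.score(X_te, y_te):.3f}",
          "feature importances:", imp.importances_mean.round(3))
```

Two equally accurate models that disagree about which inputs matter most would yield two different “explanations” of the same decision.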

Taken together, these two factors underscore why building interpretable machine learning systems is fundamentally difficult. 

Why does this matter?

Like it or not, AI/ML systems have introduced radical changes to our lives and constitute a societally and economically disruptive force. This has precipitated debates about whether data protection laws can continue to protect the freedoms and rights we enjoy. For instance, when GDPR was first introduced, there were suggestions that it would effectively legislate a “right to explanation”, such that a user could ask for an explanation of an algorithmic decision made about them. More recent analyses suggest that even minimal human involvement could be enough to exempt organisations from this requirement. Nonetheless, this places the responsibility firmly on industry to step up to the plate.

In highly regulated industries such as banking, insurance and healthcare, analytical processes are subject to scrutiny and external validation, which makes applications of machine learning in these sectors rare. Unfortunately, in spite of legal incentives and regulations, algorithms that directly or indirectly perpetuate discrimination are still commonplace today. Without any insight into the decision-making process of a machine learning model, or the key factors involved, it is difficult to identify the underlying errors and biases within the model and act quickly, or pre-emptively, to resolve them.

Aside from the social and legal imperative, understanding and trusting models and their results is a hallmark of good science. Without this, we have no assurance of how a model will behave in the wild, or of whether modified inputs will produce unwanted or unpredictable decisions.

This might paint a grim picture for ML, but there is light at the end of the tunnel!

New techniques are emerging in ML interpretability that track the causal effects of input variables on predictions, learn proxy models that are themselves interpretable, or visualise model behaviour so that a human can inspect the predictions. As they continue to be integrated into ML pipelines, these tools can make it possible to detect bias or unexpected outputs during development, if not in real time. Even where these techniques fall short, statistical researchers continue to make great strides towards simple, intelligible models that can perform just as well as, if not better than, ML models.
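To give a flavour of one of these approaches, here is a minimal sketch of a proxy (or “surrogate”) model, assuming scikit-learn: a shallow decision tree is trained to mimic an opaque neural network, and the tree’s readable rules then serve as an approximate explanation of the black box.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=4, random_state=1)

# The opaque model whose behaviour we want to explain
black_box = MLPClassifier(hidden_layer_sizes=(5, 3), max_iter=2000,
                          random_state=1).fit(X, y)

# A global surrogate: a shallow tree trained on the black box's own predictions
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the tree mimic the black box?
print("fidelity:", round(surrogate.score(X, black_box.predict(X)), 3))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))
```

A surrogate is only ever an approximation, so its fidelity to the black box should be reported alongside any explanation it provides.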

So all is not lost! 

All that remains is for the industry to take meaningful action to embed interpretability into ML pipelines. The number of Fairness, Accountability and Transparency teams forming and storming in big organisations is cause for a little hope, but it remains to be seen whether their activity will be performative or effect real change. No matter the path, transparency is an increasingly attainable goal and would undoubtedly allow us to address the power differential between humans and AI systems and tackle the more challenging problems of fair and ethical AI.
