Different people learn different things in different ways. Some people learn by listening to lectures and teachers, some by reading and self-study, and some by doing, trying things out for themselves or in an apprenticeship-style setting.

Children mostly learn by listening or by doing things for themselves, since their reading comprehension has not yet developed sufficiently. It is probably around the age of 12–13 that most kids' reading comprehension develops enough to enable self-study. So kids who previously found it hard to listen and learn from their teachers in a classroom setting begin…

Amazon.com swears by “Working Backwards” as a tenet, and it is a great tenet that has worked well for the company. The work culture is such that employees across the company adhere to it as a principle.

Working at Amazon, I too got to learn the benefits of this principle. It is an excellent rule to follow for any consumer-facing product or company like Amazon. You keep “Customer Obsession” or customer stories as your north star and “work backwards” from there, letting them indicate which direction to move in. …

I recently watched this talk by Zachary Lipton. The talk is two years old but still relevant, important and informative.

Here are a few things that stuck out for me on this matter -

  • Excitement + Ignorance = Misinformation
  • AI/ML is currently in the same phase the dot-com boom was in during the 90s. Back then, anyone who could get a website registered was a software expert. And now anyone can become
    – a self-appointed AI authority or influencer, with limited or no technical expertise,
    – or get into positions of authority by throwing money at the problem.

In many Machine Learning or Data Science workflows it is common to save and checkpoint several models and compare how they perform on a held-out validation dataset. These saved models could be -

  • checkpoints in a single training trial saved at different epochs, in which case the network structure would be the same but the weights and parameters would differ across checkpoints. Checkpointing in this fashion allows us to compare and contrast training and validation accuracy across the models saved at different epochs.
  • Or they could be different training trials…
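As a minimal, framework-agnostic sketch of the comparison described above (the `Checkpoint` record and `select_best` helper here are illustrative, not part of any particular library), you can track the validation accuracy recorded for each saved model and pick the best one:

```python
# Sketch: tracking saved checkpoints and selecting the one with the best
# held-out validation accuracy. Paths and numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class Checkpoint:
    path: str        # where the model weights were saved
    epoch: int       # training epoch at which it was checkpointed
    val_acc: float   # accuracy on the held-out validation set

def select_best(checkpoints):
    """Return the checkpoint with the highest validation accuracy."""
    return max(checkpoints, key=lambda c: c.val_acc)

# Three checkpoints from a single training trial
ckpts = [
    Checkpoint('model-epoch05.params', 5, 0.81),
    Checkpoint('model-epoch10.params', 10, 0.88),
    Checkpoint('model-epoch15.params', 15, 0.86),  # validation accuracy dropping
]
best = select_best(ckpts)
print(best.path)  # → model-epoch10.params
```

The same bookkeeping works whether the checkpoints come from one trial at different epochs or from separate training trials; only what varies between the saved models changes.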

In this post I will share a few of my thoughts on explainability in AI/ML.

Motivating Stories

These situations illustrate why explainability in AI is important.

  • When you are riding in a self-driving taxi and it takes a sudden, unexpected detour, you become suspicious and probably scared, wondering whether the car is malfunctioning or has been hacked, and whether you are being taken somewhere else. The AI agent driving the taxi will need to give an explanation, something like: the road ahead has been blocked, or there is a demonstration on a street ahead, etc., and then…

Authors: Anirudh Acharya, Rajan Singh, Roshani Nagmote, Hagay Lupesko — Amazon AI software engineers.

MXNet 1.2 adds built-in support for ONNX

With the latest Apache MXNet 1.2 release, MXNet users can now use a built-in API to import ONNX models into MXNet. With this new functionality, developers can import models created with other neural network frameworks into MXNet and use them for inference or fine-tuning.

What is ONNX?

Open Neural Network Exchange (ONNX) is an open source serialization format to encode deep learning models. ONNX defines the format for the neural network computational graph and an extensive list of operators often used in neural network architectures…
