Numerical Differentiation in Coding: The Pythonic Way

Farhan Hai Khan · Published in Geek Culture · 5 min read · Mar 10, 2021

Have you ever struggled to code the derivative of a function f(x)? Do you need a functional approach that can automate differentiation for you? If the answer to either of these questions is yes, then this blog post is definitely meant for you.

I have provided a Google Colaboratory notebook and the GitHub repository Num_Diff so you may follow along using these utilities for a better understanding of the topic.

Let’s start Numerical Differentiation in Python.

Why Numerical Differentiation?

Consider the following problem: find the derivative f'(x) of f(x) = arcsin(√(1 − x²)) for 0 < x < 1, and evaluate it at a given value of x.

Sample Problem by Author

Before writing some formal code, import the math module in Python:

import math

Now, you might construct the objective function as follows:

def f(x):
    return math.asin(math.sqrt(1 - math.pow(x, 2)))

To obtain the derivative f'(x), a typical mathematician would go on to differentiate the function by hand using a lengthy and complex chain-rule computation, which might look as follows, and then code it:

Calculation for d/dx{f(x)}: f'(x) = −x / (|x|·√(1 − x²)) [Source: Online Derivative Calculator]

def differentialfx(x):
    return (-x) / (abs(x) * math.sqrt(1 - math.pow(x, 2)))

Scary, eh? Well, we're not here for that; instead, we will try to automate the differentiation operator (d/dx) as a whole.

Let’s simplify maths using maths. (Source: KnowYourMeme)

Concept of Limits, Derivatives, and Approximations

The limit of a function at a point a in its domain (if it exists) is the value that the function approaches as its argument approaches a.

Limit of a Function
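The image is not reproduced here; in the usual notation this reads:

$\lim_{x \to a} f(x) = L$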

We will use the above definition to obtain the Left Hand and Right Hand Derivatives (LHD and RHD), as follows:

Definition of LHD and RHD
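The image is not reproduced here; the standard definitions it refers to are (written in LaTeX):

LHD: $f'(a^-) = \lim_{h \to 0^+} \frac{f(a) - f(a - h)}{h}$, RHD: $f'(a^+) = \lim_{h \to 0^+} \frac{f(a + h) - f(a)}{h}$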

For a better approximation, the average of the two is often taken as the best estimate of the derivative at x = a. Mathematically, what we are trying to compute is:

Computed Average Derivative
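This is the central difference formula, which the code below implements:

$f'(a) \approx \frac{f(a + h) - f(a - h)}{2h}$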

Hence, to get the derivative of f(x) at x = a, i.e. f'(a), we will use the above formula, substituting a very small positive number (say 10^(-6)) for h. Let's dive right into coding it:

def differentiate(func, a, h=1e-6):
    """
    Returns the derivative of the passed function f(x) at a given value
    a using the Right Hand and Left Hand derivative approximation.
    func = univariate function f(x)
    a    = value at which the derivative should be calculated, f'(x=a)
    h    = the step size, a value very close to zero but not zero
    """
    return (func(a + h) - func(a - h)) / (2 * h)
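For instance, a quick sanity check (reusing f and differentialfx defined above; x = 0.5 is just an illustrative value):

print(differentiate(f, 0.5))   # numerical estimate
print(differentialfx(0.5))     # analytical value, -1/sqrt(0.75) ≈ -1.1547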

Results with Verification

Now let us compute some derivatives using the above-defined scripts for 0 < x < 1. The approach we will use here is simple:

  1. Loop through num from 0.01 to 0.99.
  2. Calculate the derivative using both the hard-coded method and numerical differentiation.
  3. Check whether the two computed values are close to each other.
  4. If the values aren't close, store and display the value.
  5. Inspect the individual values where the results were not close.

DIFFS = []
for i in range(1, 100):
    num = i / 100
    approx_estimated_value = differentiate(f, num)
    actual_true_value = differentialfx(num)
    val = math.isclose(approx_estimated_value,
                       actual_true_value, rel_tol=1e-9)
    if not val:
        print("Suitable Difference Found at", num)
        DIFFS.append(num)

Output Obtained:

Suitable Difference Found at 0.02
Suitable Difference Found at 0.99

Note: These differences occur because of approximation error. If we only need agreement up to about n significant digits, they can be avoided by relaxing the tolerance, i.e. using n=6 (or a lower value) in math.isclose(approx_estimated_value, actual_true_value, rel_tol=1e-n). In that case no drastic differences are found.
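As a quick illustration of the note above (a minimal sketch, using x = 0.02, one of the values the stricter check flagged):

print(math.isclose(differentiate(f, 0.02), differentialfx(0.02), rel_tol=1e-6))   # True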

Inspect the Differences:

for each_num in DIFFS:
    print("Values at", each_num, ":", differentiate(f, each_num, h=1e-6),
          ",", differentialfx(each_num))

Output:

Values at 0.02 : -1.0002000616626816 , -1.000200060020007 
Values at 0.99 : -7.088812059172223 , -7.088812050083354

The values only differ after 8 significant digits or so; hence, running the script with math.isclose(approx_estimated_value, actual_true_value, rel_tol=1e-8) shows that there are no differences.

Yay! We were very accurate! (Source : The Culturist)

Higher Level Information

PyTorch (a Python deep learning library) has autograd features that apply the chain rule of differential calculus over tree-like structures (computation graphs) to perform the same task in a more efficient and faster way:

import torch

# Wrap the input in a tensor that tracks gradients
# (torch.autograd.Variable is the older API; torch.tensor(..., requires_grad=True) is the modern equivalent)
x = torch.autograd.Variable(torch.Tensor([0.5]), requires_grad=True)

def fnew(x):
    return torch.asin(torch.sqrt(1 - torch.pow(x, 2)))

y = fnew(x)
y.backward()          # backpropagate to populate x.grad
print(float(x.grad))  # derivative of f at x = 0.5
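As a quick cross-check (a minimal sketch, assuming the snippet above and the earlier f and differentiate definitions have already been run), the autograd gradient should agree with the numerical estimate:

print(float(x.grad))           # PyTorch autograd gradient at x = 0.5
print(differentiate(f, 0.5))   # central-difference estimate, ≈ -1.1547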

However, the implementation based on the math module will not work with PyTorch tensors; we have to use torch functions so that autograd can track the computation. Since this is a vast topic beyond the scope of this discussion, see the Further Reading section below to learn how to implement automatic differentiation using autograd in PyTorch.

Note: The implementation described in this blog demonstrates the easiest way of differentiation, the numerical way. Since it requires extra function evaluations for every derivative, it should not be used on a large scale, because a noticeable time loss will occur. The section below shows the measured timing differences.

Running Time of Scripts

After running the three methods separately and timing those scripts, the following averaged timing data is obtained:

Over 100 Iterations:
ClassicalDifferential : 6.438255310058594e-05 seconds
NumericalDifferential : 0.00017073869705200195 seconds
TorchDifferential     : 0.010568954944610597 seconds

Over 1000 Iterations:
ClassicalDifferential : 5.8611392974853516e-05 seconds
NumericalDifferential : 0.00013384795188903809 seconds
TorchDifferential     : 0.010314790725708008 seconds

Appendix: Running Time Codes

Code for Running Time of Each Method
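The original gist is not embedded here; below is a minimal sketch of how such a benchmark might look. It assumes f, differentialfx, differentiate, and fnew from earlier are defined and torch is imported; the labels match the output above, but the exact numbers will vary by machine:

import time

def time_method(label, fn, iterations=100):
    # Run fn() repeatedly and report the average wall-clock time per call
    start = time.time()
    for _ in range(iterations):
        fn()
    print(label, ":", (time.time() - start) / iterations, "seconds.")

time_method("ClassicalDifferential", lambda: differentialfx(0.5))
time_method("NumericalDifferential", lambda: differentiate(f, 0.5))
time_method("TorchDifferential",
            lambda: fnew(torch.tensor([0.5], requires_grad=True)).backward())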

Further Reading & Understanding

If any or all of these concepts don't make complete sense to you, consider reading further articles on limits, derivatives, and automatic differentiation to strengthen your understanding.
