We’re introduced to the Manhattan distance in this lesson. What are some applications of this distance function? Is it used often in data science?
One place that the Manhattan distance shows up very often is when working with vectors. Suppose, for example, that we have two vectors:
v1 with coordinates (x1, y1) and v2 with coordinates (x2, y2). The difference between v1 and v2 is defined by how many units we must move horizontally and vertically to get from one vector to the other. Since the amount that we move in either direction is greater than or equal to zero, we need each difference to be non-negative, which we can guarantee by simply taking the absolute value. Therefore, our differences are |x1 - x2| units horizontally and |y1 - y2| units vertically, for a total of |x1 - x2| + |y1 - y2| units; this is exactly the Manhattan distance between the two vectors.
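This computation is short enough to sketch directly in Python (the function name here is our own choice, not a standard library routine):

```python
def manhattan_distance(v1, v2):
    """Sum of the absolute coordinate differences between two 2-D vectors."""
    (x1, y1), (x2, y2) = v1, v2
    # Move |x1 - x2| units horizontally and |y1 - y2| units vertically.
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan_distance((1, 2), (4, 6)))  # |1 - 4| + |2 - 6| = 3 + 4 = 7
```

The same idea extends to vectors of any dimension by summing the absolute differences over every coordinate.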
Note: You will also come across the Manhattan distance under other names; for example, taxicab metric and L1 norm are common alternatives.