Deep Name Collisions

As the field of deep learning has matured, it has incorporated advances from many fields of scientific inquiry (i.e., it has eaten them whole). As such, practitioners have helpfully integrated the language of many sub-fields, to ease the process of learning and enhance cross-domain understanding.

Consider, for example, a deep neural network. Some values are produced during inference (they change with each input), while others are trained ahead of time to describe the problem (they do not change).
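For the concrete-minded, here is a minimal sketch of that distinction in plain NumPy (nothing framework-specific; the layer sizes and names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Values trained ahead of time: fixed at inference, the same for every input.
# (Call them parameters, weights, kernels, filters... see the table below.)
W = rng.normal(size=(4, 3))  # a hypothetical 4-in, 3-out dense layer
b = np.zeros(3)

def forward(x):
    # Values produced during inference: recomputed for every input.
    # (Call them activations, features, neurons...)
    return np.maximum(0, x @ W + b)  # ReLU nonlinearity

print(forward(rng.normal(size=4)))  # a different input -> different activations
print(forward(rng.normal(size=4)))  # ...but W and b never change
```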

[Figure: a neural network]

The following table describes possible names for each of these, and the originating field.

Things that change on each input    Things that don't change on each input    Where does this shit come from?
Neurons                             Synapses                                  Biology
Variables/Features                  Parameters                                Statistics
Matrices                            Kernels                                   Linear Algebra
Images/Pixels                       Filters                                   Image Processing
Activations                         Weights                                   #@&% you

Because newcomers may know only some of the terminology, it is conventional to use these terms interchangeably; e.g., features and filters, or activations and kernels. They can also be combined for enhanced readability; e.g., “neuron-feature-activations” and “kernel-parameter-filter-weights”. This makes deep learning easily accessible … and fun!

This is really just an example; other helpful terminology equivalences include feature “maps” and channels, or tensors and … multidimensional arrays.
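To belabor the point, here is a hedged illustration of those last two equivalences (the array shape is arbitrary, chosen to mimic a common image-batch layout):

```python
import numpy as np

# A "tensor" is... a multidimensional array.
batch = np.zeros((8, 16, 32, 32))  # hypothetical (N, C, H, W) layout

# The second axis holds 16 "channels", which, if this happened to be the
# output of a convolution, you would be equally entitled to call 16
# "feature maps".
print(batch.shape[1])  # 16, whatever you choose to call them
```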

Thanks science, and happy name colliding!