Convolutional networks usually start with a convolutional pipeline, in which each layer applies a number of convolutional filters that highlight different features of interest. The convolutional kernels (the filter parameters) are learned from the data at training time. Here's an example of convolution in action, highlighting all vertical edges.
The code for reproducing the figure (apart from the data) is below.
import numpy as np
from scipy.signal import convolve2d
import cv2
import matplotlib.pyplot as plt

# Load the image and convert it to grayscale by averaging the color channels
x = cv2.imread("person1.jpg")
x = np.mean(x, axis = -1)

# Vertical-edge kernel; subtracting the mean makes it zero-sum,
# so flat image regions map to zero
w = np.array([[0, 1, 1],
              [0, 1, 1],
              [0, 1, 1]])
w = w - np.mean(w)

# Convolve and show the original and filtered images side by side
y = convolve2d(x, w)
fig, ax = plt.subplots(1, 2, sharex = True, sharey = True)
ax[0].imshow(x, cmap = 'gray')
ax[1].imshow(y, cmap = 'gray')
plt.show()
When a sequence of convolution operations is stacked together, we get powerful feature extraction operators that transform the data representation into something a computer can recognize more easily. For example, the figure below illustrates (part of) the processing pipeline of our real-time age estimation demo.
In addition to the convolutions, the pipeline includes max-pooling and ReLU nonlinearities, with three dense layers on top.
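As a rough illustration of that structure (not the actual demo network; the layer sizes, random kernels, and random weights below are made up for the sketch), here is a minimal numpy/scipy version of a conv → ReLU → max-pool stack with dense layers on top:

import numpy as np
from scipy.signal import convolve2d

def relu(a):
    return np.maximum(a, 0)

def maxpool2(a):
    # 2x2 max-pooling; crop to even dimensions first
    h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
    a = a[:h, :w]
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))   # stand-in for a grayscale input image

# Two conv -> ReLU -> max-pool stages with random (untrained) 3x3 kernels;
# in a real network these kernels are learned from data
for _ in range(2):
    k = rng.standard_normal((3, 3))
    x = maxpool2(relu(convolve2d(x, k, mode="same")))

# Three dense layers on top of the flattened feature map
h = x.ravel()
for out_dim in (64, 32, 1):
    W = rng.standard_normal((out_dim, h.size))
    h = relu(W @ h)

print(h)   # final activation

In a trained network the kernels and dense-layer weights come from gradient descent on labeled data rather than from a random generator, but the flow of data through the layers is the same.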