Introduction to AI Pt 9 - Refining our EEG with 't-SNE' from dense layer cnn Watch Video
⏲ Duration: 15 min 35 sec ✓ Published: 22-May-2022
Description: Displaying the feature maps and convolutional filters is a great way to explain the performance of our CNN layers. But what of our dense multi-layer perceptron architectures? How can we explain to stakeholders 'how the magic happens' inside these structures?

Enter 't-SNE', short for 't-distributed Stochastic Neighbour Embedding'. This technique takes very high-dimensional data and reduces it to 2D or 3D. It does this by preserving local neighbourhood structure: points that are 'close' in the high-dimensional space stay close in the low-dimensional embedding.
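As a minimal sketch of the idea, the snippet below runs scikit-learn's t-SNE on synthetic feature vectors standing in for dense-layer activations of EEG epochs; the array shapes, class count, and variable names are illustrative assumptions, not taken from the video:

```python
# Sketch: projecting high-dimensional dense-layer activations to 2-D with t-SNE.
# The random features below are a stand-in for real CNN dense-layer outputs.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Pretend these are 128-D activations for 300 EEG epochs across 3 classes,
# with each class clustered around a different mean.
features = np.vstack([
    rng.normal(loc=c * 3.0, scale=1.0, size=(100, 128)) for c in range(3)
])
labels = np.repeat(np.arange(3), 100)

# Reduce 128 dimensions to 2 while preserving local neighbourhood structure.
embedding = TSNE(
    n_components=2, perplexity=30, init="pca", random_state=0
).fit_transform(features)

print(embedding.shape)  # one 2-D point per input sample
```

The resulting 2-D points can then be scatter-plotted and coloured by class label to show stakeholders how well the dense layer separates the classes.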