Ongoing

Deep Learning With Applications

Room: Room 306, Bldg: Becton Building, FDU Metropolitan Campus, 960 River Road, Teaneck, New Jersey, United States, 07666

September 21 through November 2, 2024. Six Saturdays, 1:30-4:30 pm (9/21, 9/28, 10/5, 10/19, 10/26, 11/2).

The IEEE North Jersey Section Communications Society Chapter is offering a course entitled "DEEP LEARNING WITH APPLICATIONS". Deep learning is a transformative field within artificial intelligence and machine learning that has revolutionized our ability to solve complex problems in domains including computer vision, natural language processing, and reinforcement learning. This hands-on course is designed to give students an understanding of how these successes are made possible by drawing inspiration from the way that brains, both human and otherwise, operate. Students will gain a comprehensive foundation in the principles, techniques, and applications of deep neural networks. By working through applications based on real data sets, students will learn how to apply deep learning in practice using Python. Participants will be asked to design and train deep neural networks to perform tasks such as image classification using commonly available data sets; they are also encouraged to apply the techniques from this course to other data sets according to their interests (discuss with the instructor to propose your own project). More importantly, this will set the foundation for understanding and developing Generative AI applications.

The IEEE North Jersey Section's Communications Society Chapter can arrange for IEEE CEUs - Continuing Education Units (for a $5 charge) upon completion of the course.

Course prices: $75 for Undergrad/Grad/Life/ComSoc members, $100 for IEEE members, $150 for non-IEEE members.

Co-sponsored by: Education Committee

Speaker(s): Thomas Long

Agenda:
1. Introduction to Neural Networks: Explore the fundamental concepts of artificial neural networks, backpropagation, activation functions, and gradient descent, laying the groundwork for understanding deep learning.
2. Introduction to PyTorch: Learn how to implement and train neural networks using PyTorch, one of the most popular deep learning frameworks. Understand tensors. (See the brief sketch following this course description.)
3. Computer Vision Applications: Apply deep learning to computer vision problems, including image classification and object detection using Convolutional Neural Networks (CNNs).
4. Training and Optimizing Deep Neural Networks: Study techniques for training deep neural networks effectively, including optimization algorithms, weight initialization, regularization, and dropout.
5. Sequential Data Analysis: Explore how deep learning is used to analyze sequential data using Recurrent Neural Networks (RNNs). In particular, explore how neural networks are used in Natural Language Processing (NLP) tasks such as sentiment analysis and machine translation.
6. Generative AI: Overview of generative AI techniques that leverage the patterns present in a dataset to generate new content. Applications of generative AI include large language models such as ChatGPT and image generation models such as Midjourney and Stable Diffusion.

This course assumes a basic understanding of machine learning concepts and programming skills in Python. Familiarity with linear algebra and calculus will be beneficial but not mandatory. Statistical software (Python, Scikit-learn) and deep learning frameworks (PyTorch, TensorFlow) will be used throughout the course to explore different learning algorithms and to create appropriate graphics for analysis.
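To illustrate the kind of hands-on work in agenda items 2 through 4, here is a minimal sketch of the PyTorch workflow: tensors, a small classifier, and a single training step. This is not the course's lab code; the toy data, layer sizes, and hyperparameters below are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Toy batch standing in for an image-classification data set
    # (e.g. 32 grayscale images of size 28x28) with 10 classes.
    x = torch.randn(32, 1, 28, 28)
    y = torch.randint(0, 10, (32,))

    # A small fully connected classifier; course labs may use CNNs instead.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Dropout(p=0.2),          # dropout regularization (agenda item 4)
        nn.Linear(128, 10),
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # One gradient-descent step: forward pass, loss, backpropagation, update.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(loss.item())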
Learning objectives: Subjects covered include artificial neural networks, training deep neural networks, RNNs, CNNs, image recognition, natural language processing, GANs, data processing techniques, NN architectures, and other deep learning related material. The course is subdivided into 3-hour sessions. Each session is further subdivided into lecture, guided exercises, and independent project-based exercises to build hands-on experience.

This course will be held at the FDU Teaneck, NJ campus. Checks should NOT be mailed to this address; checks can be brought in person, or online payment can be used at registration. Email the organizer with any questions about the course, registration, or other issues.

Technical Requirements: Students will need access to the Python programming language. Most of the coding in this course will use Python, and coding examples and labs will be distributed as Jupyter notebooks. In addition to a standard Python installation, most programming exercises will use the package Scikit-learn and either the PyTorch or TensorFlow libraries. Basic programming skills and some familiarity with the Python language are assumed. Students are expected to bring a laptop onto which most of these libraries can be pre-installed using Python's pip install. Books and other resources will be referenced.
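As a rough guide (the exact package list is an assumption to be confirmed by the instructor), the libraries can be pre-installed and checked along these lines:

    # Pre-install with pip, for example:
    #   python -m pip install numpy scikit-learn torch jupyter
    # Then verify the installation from Python:
    import numpy, sklearn, torch
    print("numpy", numpy.__version__)
    print("scikit-learn", sklearn.__version__)
    print("torch", torch.__version__)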

Introduction to Neural Networks and Deep Learning (Part I) – Cancelled!

Boston, Massachusetts, United States, Virtual: https://events.vtools.ieee.org/m/414504

THIS COURSE HAS BEEN CANCELLED!

Course Format: Live Webinar, 4.0 hours of instruction.

Series Overview: From the book introduction: "Neural networks and deep learning currently provides the best solutions to many problems in image recognition, speech recognition, and natural language processing." This Part 1 and the planned Part 2 (to be confirmed) series of courses will teach many of the core concepts behind neural networks and deep learning.

This is a live, instructor-led introductory course on Neural Networks and Deep Learning, planned as a two-part series. The first course is complete by itself and covers a feedforward neural network (but not the convolutional neural network, which is left for Part 2). It will be a prerequisite for the planned Part 2 course. The class material is mostly from the highly regarded and free online book "Neural Networks and Deep Learning" by Michael Nielsen, plus additional material such as some proofs of fundamental equations not provided in the book.

Reference book: "Neural Networks and Deep Learning" by Michael Nielsen, http://neuralnetworksanddeeplearning.com/

More from the book introduction: "We'll learn the core principles behind neural networks and deep learning by attacking a concrete problem: the problem of teaching a computer to recognize handwritten digits. ...it can be solved pretty well using a simple neural network, with just a few tens of lines of code, and no special libraries." "But you don't need to be a professional programmer." The code provided is in Python; even if you don't program in Python, it should be easy to understand with just a little effort.

Benefits of attending the series:
* Learn the core principles behind neural networks and deep learning.
* See a simple Python program that solves a concrete problem: teaching a computer to recognize a handwritten digit.
* Improve the result by incorporating more and more core ideas about neural networks and deep learning.
* Understand the theory, with worked-out proofs of fundamental equations.

The demo Python program (updated from the version provided in the book) can be downloaded from the speaker's GitHub account. The demo program is run in a Docker container on your Mac, Windows, or Linux personal computer; we plan to provide instructions on doing that in advance of the class. (That is one good reason to register early if you plan to attend, so that you can receive the straightforward instructions and leave yourself plenty of time to prepare the Git and Docker software that are widely used among software professionals.)
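For a flavor of the approach, here is a minimal sketch in the spirit of the book's first network: a tiny feedforward net trained with one stochastic gradient descent step on toy data, using only NumPy. It is not the speaker's demo program; the layer sizes, learning rate, and random toy data are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        s = sigmoid(z)
        return s * (1.0 - s)

    rng = np.random.default_rng(0)
    sizes = [784, 30, 10]          # e.g. 28x28 pixel input, 10 digit classes
    weights = [rng.standard_normal((y, x)) for x, y in zip(sizes[:-1], sizes[1:])]
    biases = [rng.standard_normal((y, 1)) for y in sizes[1:]]

    def feedforward(a):
        for w, b in zip(weights, biases):
            a = sigmoid(w @ a + b)
        return a

    def backprop(x, y):
        # Forward pass, keeping activations and weighted inputs for the backward pass.
        activation, activations, zs = x, [x], []
        for w, b in zip(weights, biases):
            z = w @ activation + b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # Backward pass (quadratic cost, as in the book's first network).
        delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
        grad_w = [None] * len(weights)
        grad_b = [None] * len(biases)
        grad_w[-1] = delta @ activations[-2].T
        grad_b[-1] = delta
        for l in range(2, len(sizes)):
            delta = (weights[-l + 1].T @ delta) * sigmoid_prime(zs[-l])
            grad_w[-l] = delta @ activations[-l - 1].T
            grad_b[-l] = delta
        return grad_w, grad_b

    # One SGD step on a single (input, one-hot label) pair of random toy data.
    x = rng.random((784, 1))
    y = np.zeros((10, 1)); y[3] = 1.0
    gw, gb = backprop(x, y)
    eta = 3.0
    weights = [w - eta * dw for w, dw in zip(weights, gw)]
    biases = [b - eta * db for b, db in zip(biases, gb)]
    print(feedforward(x).ravel().round(3))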
Outline:
- Feedforward Neural Networks
- Simple (Python) network to classify a handwritten digit
- Learning with Stochastic Gradient Descent
- How the backpropagation algorithm works
- Improving the way neural networks learn:
  - Cross-entropy cost function
  - SoftMax activation function and log-likelihood cost function
  - Rectified Linear Unit
- Overfitting and Regularization:
  - L2 regularization
  - Dropout
  - Artificially expanding the data set

Pre-requisites: There is some heavier mathematics in learning the four fundamental equations behind backpropagation (stated for reference after this outline), so a basic familiarity with multivariable calculus and matrix algebra is expected, but nothing advanced is required. (The backpropagation equations can also just be accepted without bothering with the proofs, since the provided Python code for the simple network simply makes use of the equations.) Basic familiarity with Python or a similar computer language.

Speaker(s): CL Kim
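For reference, the four fundamental backpropagation equations proved in the Nielsen book, in its notation ($\delta^l$ is the error vector of layer $l$, $C$ the cost, $z^l$ the weighted input, $a^l$ the activation, $\sigma$ the activation function, and $\odot$ the elementwise product):

\begin{align}
\delta^L &= \nabla_a C \odot \sigma'(z^L) \\
\delta^l &= \left((w^{l+1})^{T} \delta^{l+1}\right) \odot \sigma'(z^l) \\
\frac{\partial C}{\partial b^l_j} &= \delta^l_j \\
\frac{\partial C}{\partial w^l_{jk}} &= a^{l-1}_k \, \delta^l_j
\end{align}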