Deep Learning Fundamentals in Data Science

Welcome to "Deep Learning Fundamentals in Data Science"! Are you ready to embark on an exciting journey into the world of deep learning and artificial intelligence? Whether you are a complete beginner or an experienced data scientist, this tutorial is the perfect gateway to enhance your skills and elevate your career. Throughout this tutorial, we will walk you through the core concepts, techniques, and applications of deep learning, providing you with a solid foundation to conquer the ever-evolving landscape of data science.

Our engaging and motivation-driven approach will ensure that you not only learn the fundamentals, but also find the inspiration to push the boundaries of what's possible with deep learning. As we delve into the following six sections, we'll provide a comprehensive overview of the field, peppered with real-world examples and hands-on exercises:

Table of Contents:

  1. Introduction to Deep Learning: Discover the origins and significance of deep learning, and explore the differences between traditional machine learning and deep learning.

  2. Neural Networks and Activation Functions: Dive into the structure of neural networks, their building blocks, and the role of activation functions in transforming input data.

  3. Loss Functions and Optimization Algorithms: Learn about the key metrics used to evaluate model performance and the optimization techniques that drive learning.

  4. Convolutional Neural Networks (CNNs): Uncover the power of CNNs for image and video recognition, and learn how to build your own state-of-the-art models.

  5. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM): Delve into the world of sequential data, and understand how RNNs and LSTM models handle time series and natural language processing tasks.

  6. Applications and Best Practices: Explore real-world applications of deep learning across industries, and learn the best practices for designing, training, and deploying your models.

So, what are you waiting for? Let's dive into the fascinating world of deep learning and unlock the true potential of data science!

1. Introduction to Deep Learning

In this first section of our deep learning tutorial, we will provide a solid foundation for both beginners and advanced learners, exploring the origins and significance of deep learning. We'll also dive into the key differences between traditional machine learning and deep learning, setting the stage for the rest of the tutorial.

1.1 What is Deep Learning?

Deep learning is a subfield of machine learning that revolves around the concept of learning from data using artificial neural networks. These networks are inspired by the structure and functioning of the human brain, enabling computers to learn complex patterns and representations from vast amounts of data. Deep learning has emerged as a powerful tool, driving innovation across various industries and applications, such as computer vision, natural language processing, speech recognition, and more.

1.2 The Origins of Deep Learning

The roots of deep learning can be traced back to the 1940s, when Warren McCulloch and Walter Pitts proposed the first mathematical model of an artificial neuron; Frank Rosenblatt's perceptron followed in the late 1950s, laying the groundwork for neural networks. The field has evolved through various stages, with key milestones including the popularization of the backpropagation algorithm in the 1980s, which enabled the training of multi-layer neural networks. In the early 2010s, deep learning experienced a significant breakthrough with the advent of deep convolutional neural networks and their astounding performance in image recognition tasks.

1.3 Key Differences Between Machine Learning and Deep Learning

As we progress through this learning journey, it's essential to understand the differences between traditional machine learning and deep learning:

  1. Feature Engineering: In traditional machine learning, the success of a model largely depends on the quality of hand-crafted features. In contrast, deep learning models are capable of automatically learning meaningful features from raw data, significantly reducing the need for manual feature engineering.

  2. Model Complexity: Deep learning models typically consist of multiple layers and a large number of neurons, allowing them to learn more complex and hierarchical representations of the data. Machine learning models, on the other hand, are generally less complex and require more human intervention to fine-tune.

  3. Data Requirements: Deep learning models often require a large amount of data to reach their full potential. In comparison, traditional machine learning models can work effectively with smaller datasets.

  4. Hardware Demands: Deep learning models, particularly during the training phase, require powerful hardware, such as GPUs or TPUs, to handle their computational complexity. Traditional machine learning models can be trained on more modest hardware.

1.4 The Power of Deep Learning

Deep learning has emerged as a game-changer in the field of artificial intelligence, pushing the boundaries of what is possible. With deep learning models, we can now accurately identify objects in images, generate realistic text, translate languages, and even synthesize human-like voices. As we continue our learning journey through this tutorial, we'll delve deeper into the key concepts and techniques that have shaped the field, equipping you with the knowledge and skills required to excel in this ever-evolving domain.

In conclusion, this introductory section has set the stage for our deep learning tutorial, covering the basics and key differences between machine learning and deep learning. As we continue through the tutorial, both beginners and advanced learners will benefit from a deeper understanding of deep learning concepts, laying the groundwork for the exciting and engaging learning experiences that lie ahead.

2. Neural Networks and Activation Functions

In this section of the deep learning tutorial, we'll dive into the structure of neural networks and explore the crucial role of activation functions. Both beginners and advanced learners will benefit from understanding the building blocks of neural networks and how activation functions transform input data.

2.1 Understanding Neural Networks

A neural network is a collection of interconnected artificial neurons organized into layers. These neurons, also known as nodes or units, process input data and pass the results to the next layer. A typical neural network consists of three types of layers:

  1. Input Layer: This is the first layer of the network, responsible for receiving input data and passing it to the subsequent layers.

  2. Hidden Layers: These are the intermediate layers between the input and output layers. The number of hidden layers and neurons within them define the complexity and depth of the network. Deep learning models often have multiple hidden layers, hence the term "deep."

  3. Output Layer: This is the final layer of the network, responsible for producing the desired output, such as a predicted class or a numerical value.

The connections between neurons across layers have associated weights, which determine the strength of the connection. During the training process, these weights are adjusted to minimize the error between the predicted and actual outputs.
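
As a first taste of the arithmetic involved, here is a minimal NumPy sketch of a single layer computing its weighted sums; the layer sizes and random values are purely illustrative.

```python
import numpy as np

# A toy layer: 3 inputs feeding 2 neurons (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
x = rng.normal(size=3)        # input vector from the previous layer
W = rng.normal(size=(2, 3))   # one weight per connection
b = np.zeros(2)               # one bias per neuron

# Each neuron computes a weighted sum of its inputs plus its bias;
# training adjusts W and b to reduce the prediction error.
z = W @ x + b
print(z)                      # raw outputs handed to the next layer
```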

2.2 Activation Functions

Activation functions play a critical role in neural networks, as they introduce non-linearity into the model, allowing it to learn and approximate complex relationships within the data. Without activation functions, a stack of layers would collapse into a single linear transformation, limiting the network to modeling only linear relationships. Some common activation functions, each sketched in code after this list, include:

  1. Sigmoid: The sigmoid function squashes input values into a range between 0 and 1. It is commonly used in the output layer for binary classification problems.

  2. Hyperbolic Tangent (tanh): The tanh function is similar to the sigmoid function, but it maps input values to a range between -1 and 1. It is often used in hidden layers.

  3. Rectified Linear Unit (ReLU): The ReLU function is defined as the maximum of 0 and the input value. It is computationally efficient and has become the default activation function for many deep learning models.

  4. Leaky ReLU: The Leaky ReLU is a variation of the ReLU function that allows a small, non-zero gradient for negative input values. This can help mitigate the "dying ReLU" problem, where some neurons in the network become inactive and stop learning.

  5. Softmax: The softmax function is used in the output layer of multi-class classification problems. It converts the output values into probabilities, ensuring that they sum up to 1.
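
To make these definitions concrete, here is a minimal NumPy sketch of each function; the 0.01 slope for Leaky ReLU is a common but arbitrary choice.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # squashes to (0, 1)

def tanh(z):
    return np.tanh(z)                     # squashes to (-1, 1)

def relu(z):
    return np.maximum(0.0, z)             # max(0, z)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)  # small slope for negative inputs

def softmax(z):
    e = np.exp(z - np.max(z))             # shift for numerical stability
    return e / e.sum()                    # probabilities that sum to 1

z = np.array([-2.0, 0.0, 3.0])
print(relu(z), softmax(z))
```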

2.3 The Role of Activation Functions in Neural Networks

Activation functions are applied to the weighted sum of inputs at each neuron, transforming the data and passing it to the next layer. This transformation process enables neural networks to learn complex and non-linear patterns in the data. Choosing the right activation function for a specific layer or problem can significantly impact the performance of the model.
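
Putting these pieces together, the sketch below (with arbitrary layer sizes) runs one forward pass through a tiny two-layer network: each layer applies an activation function to its weighted sums before handing the result on.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=4)                         # 4 input features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # hidden layer: 8 neurons
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)  # output layer: 3 classes

h = relu(W1 @ x + b1)      # hidden layer: weighted sum, then activation
y = softmax(W2 @ h + b2)   # output layer: class probabilities
print(y, y.sum())          # probabilities summing to 1.0
```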

In conclusion, understanding the structure of neural networks and the role of activation functions is essential for both beginners and advanced learners in deep learning. As we continue through this tutorial, we'll build on these foundational concepts to explore more advanced topics, empowering you to harness the power of deep learning in your data science projects.

3. Loss Functions and Optimization Algorithms

In this section of the deep learning tutorial, we'll explore loss functions and optimization algorithms. Both beginners and advanced learners will benefit from understanding the key metrics used to evaluate model performance and the optimization techniques that drive learning.

3.1 Loss Functions

Loss functions, also known as cost functions or objective functions, quantify the difference between the predicted outputs and the actual outputs (ground truth) for a given dataset. They play a crucial role in the training process of neural networks, as they help determine the model's performance and guide the optimization of the weights. Some common loss functions, all sketched in code after this list, include:

  1. Mean Squared Error (MSE): MSE is the average of the squared differences between the predicted and actual outputs. It is commonly used in regression problems.

  2. Cross-Entropy Loss: Cross-entropy loss measures the difference between two probability distributions. In classification tasks, it compares the predicted class probabilities with the true class labels. It is often used in binary and multi-class classification problems.

  3. Hinge Loss: Hinge loss is used in support vector machines (SVMs) and some neural networks for binary classification problems. It penalizes predictions that fall on the wrong side of the decision boundary, or that are correct but lie within the classification margin.

  4. Huber Loss: Huber loss behaves like the MSE for small errors and like the absolute error for large ones, making it less sensitive to outliers than the MSE. It is often used in robust regression tasks.
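
Here is a minimal NumPy sketch of all four losses; the {-1, +1} label convention for hinge loss and the delta threshold for Huber loss are standard but adjustable choices.

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    p = np.clip(p_pred, eps, 1.0 - eps)            # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def hinge(y_true, score):                          # labels in {-1, +1}
    return np.mean(np.maximum(0.0, 1.0 - y_true * score))

def huber(y_true, y_pred, delta=1.0):
    err = y_true - y_pred
    quad = 0.5 * err ** 2                          # MSE-like near zero
    lin = delta * (np.abs(err) - 0.5 * delta)      # absolute-error-like far out
    return np.mean(np.where(np.abs(err) <= delta, quad, lin))

y, p = np.array([1.0, 0.0, 1.0]), np.array([0.9, 0.2, 0.6])
print(mse(y, p), binary_cross_entropy(y, p))
```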

3.2 Optimization Algorithms

Optimization algorithms are used to update the weights of a neural network in order to minimize the loss function. They are an essential component of the learning process, as they determine how well the model can adapt and generalize to new data. Some common optimization algorithms include the following (a worked example appears after the list):

  1. Gradient Descent: Gradient descent is a first-order optimization algorithm that iteratively adjusts the weights by moving in the direction of the steepest decrease of the loss function. It is a simple and widely used optimization technique.

  2. Stochastic Gradient Descent (SGD): SGD is a variation of gradient descent that updates the weights using a single randomly selected training example (or, in practice, a small mini-batch), rather than the entire dataset. This makes each update much cheaper and the algorithm more suitable for large-scale problems.

  3. Momentum: Momentum is a technique that accelerates the convergence of gradient-based optimization algorithms, such as SGD. It adds a momentum term to the weight update, allowing the algorithm to build up velocity in directions with consistent gradients, and dampening oscillations.

  4. Adaptive Moment Estimation (Adam): Adam is a popular optimization algorithm that combines the advantages of both momentum and adaptive learning rates. It computes adaptive learning rates for each weight, allowing the algorithm to converge faster and more efficiently.
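
To see gradient descent and momentum in action, the sketch below fits a toy linear model y = wx + b; the data, learning rate, and momentum coefficient are all made up for demonstration.

```python
import numpy as np

# Toy data from y = 2x + 1 plus noise; the goal is to recover w = 2, b = 1.
rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 2.0 * X + 1.0 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0
vw, vb = 0.0, 0.0        # momentum "velocity" terms
lr, beta = 0.1, 0.9      # learning rate and momentum coefficient

for step in range(200):
    y_pred = w * X + b
    gw = 2.0 * np.mean((y_pred - y) * X)  # dMSE/dw
    gb = 2.0 * np.mean(y_pred - y)        # dMSE/db
    vw = beta * vw + gw                   # accumulate velocity
    vb = beta * vb + gb
    w -= lr * vw                          # move against the gradient
    b -= lr * vb

print(w, b)  # should be close to 2.0 and 1.0
```

Replacing the full-dataset gradients here with gradients computed on one example (or a small mini-batch) turns this into stochastic gradient descent.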

3.3 The Importance of Loss Functions and Optimization Algorithms

Loss functions and optimization algorithms are critical components of the deep learning process. They work together to evaluate and improve the performance of a neural network, guiding the model towards the optimal set of weights. Selecting appropriate loss functions and optimization algorithms for a specific problem can significantly impact the model's performance and training time.

In conclusion, understanding loss functions and optimization algorithms is essential for both beginners and advanced learners in deep learning. As we continue through this tutorial, we'll build on these foundational concepts to delve deeper into advanced topics and techniques, equipping you with the knowledge and skills required to excel in your data science projects.

4. Convolutional Neural Networks (CNNs)

In this section of the deep learning tutorial, we'll uncover the power of Convolutional Neural Networks (CNNs) for image and video recognition. Both beginners and advanced learners will benefit from understanding the structure and principles of CNNs and learn how to build their own state-of-the-art models.

4.1 What are Convolutional Neural Networks?

Convolutional Neural Networks (CNNs) are a class of deep learning models designed specifically for processing grid-like data, such as images and videos. They are particularly effective in tasks like image classification, object detection, and image generation. CNNs are inspired by the organization and functioning of the human visual cortex, where neurons are spatially organized and respond to specific regions in the visual field.

4.2 Key Components of a CNN

CNNs consist of several types of layers that work together to extract and learn hierarchical features from the input data; a minimal model combining all four is sketched after this list:

  1. Convolutional Layers: Convolutional layers apply a series of filters to the input data, detecting local patterns such as edges, textures, or shapes. These filters are learned by the model during training, allowing the network to focus on the most relevant features.

  2. Activation Layers: Similar to traditional neural networks, CNNs use activation functions to introduce non-linearity into the model. ReLU is the most commonly used activation function in CNNs.

  3. Pooling Layers: Pooling layers are used to reduce the spatial dimensions of the input data, helping to control the number of parameters and computations in the network. Common pooling techniques include max pooling and average pooling.

  4. Fully Connected Layers: Fully connected layers are used in the final part of the network to perform classification or regression tasks. They take the output of the last convolutional or pooling layer and produce the final output, such as class probabilities or a numerical value.
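
Assuming PyTorch and 28x28 grayscale inputs (MNIST-sized images), here is a minimal sketch combining all four layer types; every size is an illustrative choice rather than a prescribed architecture.

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolution: local patterns
    nn.ReLU(),                                    # activation: non-linearity
    nn.MaxPool2d(2),                              # pooling: 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # fully connected: 10 classes
)

x = torch.randn(1, 1, 28, 28)                     # one dummy grayscale image
print(model(x).shape)                             # torch.Size([1, 10])
```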

4.3 Building Your Own CNN

To build your own state-of-the-art CNN, follow these general steps; a training-loop sketch follows the list:

  1. Define the architecture: Design the structure of the CNN, including the number of layers, their types (convolutional, activation, pooling, or fully connected), and the filter sizes.

  2. Initialize the weights: Initialize the weights of the filters and fully connected layers using appropriate techniques, such as Gaussian initialization or Xavier initialization.

  3. Prepare the data: Preprocess the input data, including resizing, normalization, and data augmentation.

  4. Train the model: Train the CNN using an appropriate loss function, optimization algorithm, and batch size. Monitor the model's performance on a validation set to avoid overfitting.

  5. Evaluate and fine-tune: Evaluate the model on a test dataset, and fine-tune the architecture, hyperparameters, or training procedure as needed to improve performance.
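
The sketch below walks through steps 3 and 4 in miniature, again assuming PyTorch; the placeholder model and randomly generated tensors stand in for a real architecture and dataset.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and synthetic data so the sketch runs end to end;
# a real project would use a CNN (as in section 4.2) and an actual dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
images = torch.randn(256, 1, 28, 28)
labels = torch.randint(0, 10, (256,))
train_loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):
    for xb, yb in train_loader:
        optimizer.zero_grad()                 # clear old gradients
        loss = criterion(model(xb), yb)       # forward pass + loss
        loss.backward()                       # backpropagation
        optimizer.step()                      # weight update
    # In practice, evaluate on a validation set here to catch overfitting.
```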

4.4 Applications of CNNs

CNNs have been widely adopted across various industries and applications, including:

  1. Image classification: CNNs can be used to classify images into different categories, such as identifying whether an image contains a cat or a dog.

  2. Object detection: CNNs can detect and localize multiple objects within an image, providing both the class and bounding box coordinates.

  3. Semantic segmentation: CNNs can be used to segment images into regions corresponding to different object classes, such as identifying roads, buildings, and vehicles in a satellite image.

  4. Image generation: CNNs can generate new images or modify existing ones, such as in style transfer or image inpainting.

In conclusion, understanding Convolutional Neural Networks is essential for both beginners and advanced learners in deep learning. As we continue through this tutorial, we'll build on the concepts and techniques of CNNs to explore more advanced topics and applications, empowering you to harness the power of deep learning in your data science projects.

5. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM)

In this section of the deep learning tutorial, we'll explore Recurrent Neural Networks (RNNs) and their powerful extension, Long Short-Term Memory (LSTM) networks. Both beginners and advanced learners will benefit from understanding the structure and principles of RNNs and LSTMs, as well as their applications in tasks involving sequential data.

5.1 What are Recurrent Neural Networks?

Recurrent Neural Networks (RNNs) are a class of neural networks designed to handle sequential data, such as time series, speech, and text. RNNs can model dependencies and relationships between elements in a sequence by maintaining an internal hidden state that is updated at each time step. This allows RNNs to capture context and learn patterns across sequences.

5.2 The Challenge of Long-Term Dependencies

While RNNs are capable of capturing short-term dependencies in sequences, they struggle with long-term dependencies due to the vanishing gradient problem. This issue arises because gradients of the loss function shrink as they are propagated back through many time steps, so the updates to the corresponding weights become vanishingly small. As a result, RNNs have difficulty learning patterns that span long sequences.
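
A quick back-of-the-envelope calculation illustrates the problem: the sigmoid's derivative never exceeds 0.25, so a gradient routed back through many time steps is a product of many small factors.

```python
# The largest possible sigmoid derivative is 0.25, so backpropagating
# through n time steps can shrink a gradient by up to 0.25 ** n.
for steps in (5, 20, 50):
    print(steps, 0.25 ** steps)
# 5 -> ~9.8e-04, 20 -> ~9.1e-13, 50 -> ~7.9e-31: effectively zero.
```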

5.3 Long Short-Term Memory (LSTM) Networks

To address the challenge of long-term dependencies, Long Short-Term Memory (LSTM) networks were introduced. LSTMs are a type of RNN that includes a special gating mechanism, allowing them to selectively store and retrieve information over long sequences. This makes LSTMs capable of learning complex, long-term dependencies and relationships in the data.

5.4 Key Components of an LSTM Network

LSTM networks consist of memory cells and three types of gates that control the flow of information within the cell:

  1. Input Gate: The input gate decides which incoming information should be stored in the memory cell.

  2. Forget Gate: The forget gate determines which information from the previous memory cell should be discarded or retained.

  3. Output Gate: The output gate controls which information from the memory cell should be passed to the next layer or time step.

These gates work together to enable LSTMs to learn and remember long-term dependencies in the data.
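
In practice you rarely wire these gates by hand; deep learning libraries expose them as a single layer. Here is a minimal PyTorch sketch with made-up sizes, showing how an LSTM layer consumes a batch of sequences and returns both per-step outputs and the final hidden and cell states.

```python
import torch
from torch import nn

# One LSTM layer; feature count, hidden size, and batch shape are illustrative.
lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)

x = torch.randn(4, 10, 8)        # 4 sequences, 10 time steps, 8 features each
output, (h_n, c_n) = lstm(x)

print(output.shape)  # torch.Size([4, 10, 32]) - hidden state at every step
print(h_n.shape)     # torch.Size([1, 4, 32])  - final hidden state
print(c_n.shape)     # torch.Size([1, 4, 32])  - final memory-cell state
```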

5.5 Applications of RNNs and LSTMs

RNNs and LSTMs have been widely adopted across various industries and applications involving sequential data, such as:

  1. Natural Language Processing (NLP): RNNs and LSTMs are used in tasks like sentiment analysis, machine translation, and text summarization.

  2. Speech Recognition: RNNs and LSTMs can be employed to convert spoken language into written text or to identify the speaker.

  3. Time Series Forecasting: RNNs and LSTMs can be used to predict future values in time series data, such as stock prices, weather conditions, or energy consumption.

  4. Music Generation: RNNs and LSTMs can learn patterns in music and generate new melodies or harmonies.

In conclusion, understanding Recurrent Neural Networks and Long Short-Term Memory networks is essential for both beginners and advanced learners in deep learning. As we continue through this tutorial, we'll build on the concepts and techniques of RNNs and LSTMs to explore more advanced topics and applications, equipping you with the knowledge and skills required to excel in your data science projects.

6. Applications and Best Practices

In this final section of the deep learning tutorial, we'll discuss various real-world applications of deep learning and share best practices for building and deploying effective models. Both beginners and advanced learners will benefit from understanding the broad range of deep learning applications and the practical advice for implementing models in real-world projects.

6.1 Real-World Applications of Deep Learning

Deep learning has revolutionized various industries and applications, including:

  1. Healthcare: Deep learning models are used for medical image analysis, drug discovery, and disease prediction.

  2. Autonomous Vehicles: Deep learning models power self-driving cars, helping them perceive and understand their environment, as well as make decisions.

  3. Natural Language Processing: Deep learning has led to significant advances in sentiment analysis, machine translation, and chatbot development.

  4. Computer Vision: Deep learning has transformed image classification, object detection, and facial recognition.

  5. Finance: Deep learning models are used for fraud detection, credit scoring, and algorithmic trading.

  6. Recommendation Systems: Deep learning models help personalize content and product recommendations for users, improving user experience and engagement.

6.2 Best Practices for Building and Deploying Deep Learning Models

To build and deploy effective deep learning models, consider the following best practices:

  1. Start with a strong foundation: Ensure that you have a good understanding of the core concepts and techniques in deep learning, including neural networks, activation functions, loss functions, and optimization algorithms.

  2. Choose the right model architecture: Select an appropriate model architecture based on the problem you're trying to solve and the data you're working with. Consider using existing architectures, such as CNNs for image-related tasks or LSTMs for sequential data.

  3. Preprocess your data: Properly preprocess your data, including normalization, data augmentation, and handling missing or unbalanced data.

  4. Regularize your model: Use regularization techniques, such as dropout, weight decay, or early stopping, to prevent overfitting and improve generalization (see the sketch after this list).

  5. Tune hyperparameters: Experiment with different hyperparameters, such as learning rate, batch size, and the number of layers, to find the optimal configuration for your model.

  6. Monitor and evaluate model performance: Regularly monitor your model's performance on a validation set during training, and evaluate it on a test set to ensure it generalizes well to unseen data.

  7. Leverage transfer learning: Make use of pre-trained models and transfer learning to save time and computational resources, especially when working with limited training data.

  8. Keep up with the latest research: Stay up-to-date with the latest advancements in deep learning, as new techniques and architectures are continuously being developed and improved.
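
As a small illustration of item 4, the PyTorch sketch below combines dropout inside the model with weight decay (L2 regularization) in the optimizer; the rates shown are common starting points, not tuned values.

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),      # randomly zeroes activations during training
    nn.Linear(64, 10),
)

# weight_decay adds an L2 penalty on the weights at every update step.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

model.train()   # training mode: dropout active
model.eval()    # evaluation mode: dropout disabled
```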

6.3 Conclusion

Deep learning is a powerful and versatile tool that has revolutionized various industries and applications. Understanding the fundamentals, as well as the advanced techniques and best practices, is essential for both beginners and advanced learners in deep learning. By following this tutorial and applying the concepts and techniques discussed, you will be well-equipped to harness the power of deep learning in your data science projects and create impactful solutions.
