Author: admin

  • ML0005 Discriminative and Generative

    What are the differences between discriminative and generative models?

    Answer

    Discriminative Models:
    Objective: Discriminative models are designed to draw a boundary between classes. They focus on modeling the conditional probability P(y∣x). They learn the mapping from features x to labels y without trying to model how the data was generated.
    Examples: Logistic Regression, Support Vector Machines (SVMs), Neural Network Classifiers

    Generative Models:
    Objective: Generative models estimate the joint probability P(x,y), or equivalently P(x∣y) and P(y), to model how the data is generated. Using Bayes’ theorem, they can then derive the conditional probability P(y∣x) for classification tasks.
    Examples: Naive Bayes, Hidden Markov Models (HMMs), Variational Autoencoders (VAEs), and Generative Adversarial Networks (GANs).
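    The contrast can be sketched with a tiny from-scratch Gaussian Naive Bayes, a generative classifier: it estimates P(x∣y) and P(y) from data, then applies Bayes’ theorem to obtain P(y∣x). All data and names below are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in 1-D (synthetic data)
x0 = rng.normal(-2.0, 1.0, 200)   # class 0
x1 = rng.normal(+2.0, 1.0, 200)   # class 1
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Generative step: estimate P(x|y) (as a Gaussian) and P(y) from the data
mu = np.array([x[y == c].mean() for c in (0, 1)])
sigma = np.array([x[y == c].std() for c in (0, 1)])
prior = np.array([(y == c).mean() for c in (0, 1)])

def gaussian_pdf(v, m, s):
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def posterior(v):
    # Bayes' theorem: P(y|x) is proportional to P(x|y) * P(y)
    joint = np.array([gaussian_pdf(v, mu[c], sigma[c]) * prior[c] for c in (0, 1)])
    return joint / joint.sum()

print(posterior(-3.0))  # points far on the negative side favour class 0
```

    A discriminative model such as logistic regression would instead fit the decision boundary for P(y∣x) directly, without ever estimating the class-conditional densities.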



  • ML0004 Underfitting

    Which of the following descriptions is inaccurate in regard to underfitting?

    A. Underfitting occurs when a model is too simple to capture the underlying patterns from the data.

    B. When underfitting occurs, the model will have high bias and low variance.

    C. Increasing the model’s complexity and reducing regularization can address underfitting.

    D. An underfit model performs well with the training data but performs poorly on new, unseen data.

    Answer

    D
    Explanation:
    Underfitting means the model performs poorly on both the training data and the unseen test data because it hasn’t learned enough from the training set.
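    Option D actually describes overfitting. A minimal NumPy sketch (synthetic data) shows the underfitting pattern: a straight line fit to quadratic data has a large error on both the training set and the test set, while increasing model complexity, as option C suggests, resolves it.

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(-3, 3, 50)
y_train = x_train ** 2 + rng.normal(0, 0.1, 50)   # quadratic target + noise
x_test = np.linspace(-3, 3, 25)
y_test = x_test ** 2

# Underfit: a straight line is too simple for the curvature (high bias)
line = np.polyfit(x_train, y_train, deg=1)
train_err = np.mean((np.polyval(line, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(line, x_test) - y_test) ** 2)

# Increasing model complexity (option C) fixes the problem
quad = np.polyfit(x_train, y_train, deg=2)
test_err_quad = np.mean((np.polyval(quad, x_test) - y_test) ** 2)

# The underfit model's errors are large AND similar on train and test;
# the quadratic model's test error is tiny
print(train_err, test_err, test_err_quad)
```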


  • ML0003 Overfitting

    What is overfitting, and how can it be avoided?

    Answer

    Overfitting happens when a model tries to learn the training data too well, including its noise and outliers, leading to poor performance on new, unseen data. The model becomes too specialized to the training data, failing to generalize to other data.

    To avoid overfitting:
    1. Simplify the model: Use less complex models.
    2. Get more data or use data augmentation: A larger, more varied dataset helps the model generalize.
    3. Regularization: Penalize complex models with techniques like L1/L2 regularization.
    4. Validation & Early Stopping: Validate frequently and stop training when performance plateaus.
    5. Dropout: For neural networks, dropout layers can also be used to reduce overfitting.
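    As a sketch of point 3, L2 (ridge) regularization has a closed-form solution that penalizes large weights. The example below uses synthetic data and hypothetical names to compare the unregularized and regularized fits.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 30, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:2] = [3.0, -2.0]                 # only 2 informative features
y = X @ w_true + rng.normal(0, 0.5, n)

def ridge(X, y, lam):
    # L2-regularized least squares: w = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_ols = ridge(X, y, 0.0)    # no regularization
w_reg = ridge(X, y, 10.0)   # penalty shrinks the weights, curbing overfitting
print(np.linalg.norm(w_ols), np.linalg.norm(w_reg))
```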


  • ML0002 Machine Learning Type

    What is the difference between supervised learning and unsupervised learning?

    Answer

    Supervised learning relies on labeled datasets: each training sample comes with a label or target output. The algorithm learns a mapping from inputs to outputs so that it can predict the labels of new, unseen inputs.

    Unsupervised learning works with unlabeled data. The algorithm aims to find hidden patterns or structures within the data.
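    A minimal k-means loop, an unsupervised algorithm, illustrates the second case: it recovers two clusters from unlabeled points without ever seeing a label. The data and initial centers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Unlabeled data: two well-separated 2-D blobs (synthetic)
data = np.concatenate([rng.normal(-5, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# Deterministic starting centers, one on each side
centers = np.array([[-1.0, -1.0], [1.0, 1.0]])
for _ in range(10):
    # Assign each point to the nearest center, then recompute the centers
    dist = np.linalg.norm(data[:, None] - centers[None], axis=2)
    assign = dist.argmin(axis=1)
    centers = np.array([data[assign == k].mean(axis=0) for k in range(2)])

print(np.round(centers))  # the centers end up near the two blob means
```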


  • ML0001 Loss Curve Plot

    The following training loss curves were plotted with different experiment settings. Which of these training loss curves most likely indicates the correct experiment settings?

    Answer

    A
    Explanation:
    In an ideal training setup, the training loss diminishes steadily over time, which indicates that the model is learning and its performance is improving.
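    That steady decrease can be reproduced with plain gradient descent on a simple quadratic loss; with a suitable learning rate (a hypothetical 0.1 here) the loss shrinks monotonically toward zero.

```python
# Gradient descent on the toy loss L(w) = (w - 3)^2.
# A well-chosen learning rate gives a smoothly diminishing loss curve.
w, lr = 0.0, 0.1
losses = []
for _ in range(50):
    losses.append((w - 3) ** 2)
    w -= lr * 2 * (w - 3)   # step along the negative gradient

print(losses[0], losses[-1])  # starts at 9.0, ends near zero
```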

