You have a large corpus of written support cases that can be classified into 3 separate categories: Technical Support, Billing Support, or Other Issues. You need to quickly build, test, and deploy a service that will automatically classify future written requests into one of the categories. How should you configure the pipeline?
AutoML Natural Language is a service that lets you quickly build, test, and deploy natural language processing (NLP) models without requiring expertise in NLP or machine learning. You can use it to train a classifier on your corpus of written support cases and then call the AutoML API to classify new requests. Once the model is trained, it can be deployed behind a REST API, which allows the classifier to be integrated into your pipeline and easily consumed by other systems.
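As an illustration only, the following sketch calls a deployed AutoML Natural Language text classification model from Python; the project ID, model ID, and request text are placeholder values you would replace with your own.

```python
from google.cloud import automl

project_id = "my-project"          # placeholder
model_id = "TCN1234567890"         # placeholder text classification model ID
content = "My invoice shows a duplicate charge for last month."

prediction_client = automl.PredictionServiceClient()

# Full resource name of the trained classification model.
model_full_id = automl.AutoMlClient.model_path(project_id, "us-central1", model_id)

# Wrap the incoming support request as a text snippet payload.
text_snippet = automl.TextSnippet(content=content, mime_type="text/plain")
payload = automl.ExamplePayload(text_snippet=text_snippet)

response = prediction_client.predict(name=model_full_id, payload=payload)

# Each annotation carries a category label and a confidence score.
for annotation in response.payload:
    print(annotation.display_name, annotation.classification.score)
```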
You are training and deploying updated versions of a regression model with tabular data by using Vertex AI Pipelines, Vertex AI Training, Vertex AI Experiments, and Vertex AI Endpoints. The model is deployed in a Vertex AI endpoint, and your users call the model by using the Vertex AI endpoint. You want to receive an email when the feature data distribution changes significantly, so you can retrigger the training pipeline and deploy an updated version of your model. What should you do?
Prediction drift occurs when the distribution of feature values seen in production changes significantly over time. It can degrade the performance and accuracy of the model and may require retraining or redeploying it. Vertex AI Model Monitoring lets you monitor prediction drift on your deployed models and endpoints, and set up alerts and notifications when the drift exceeds a threshold you specify. You can provide an email address to receive the notifications and use them as the signal to retrigger the training pipeline and deploy an updated version of your model. This is the most direct and convenient way to achieve your goal; a minimal configuration sketch follows the references below.
Reference:
Vertex AI Model Monitoring
Monitoring prediction drift
Setting up alerts and notifications
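As a rough sketch, the Vertex AI Python SDK can create a model deployment monitoring job with drift detection and email alerts; the project, endpoint resource name, feature names, thresholds, and email address below are assumed placeholder values.

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-project", location="us-central1")

# Endpoint that serves the regression model (placeholder resource name).
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

# Send an email when drift exceeds the per-feature thresholds below.
alert_config = model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"])
drift_config = model_monitoring.DriftDetectionConfig(
    drift_thresholds={"feature_a": 0.3, "feature_b": 0.3}  # hypothetical features
)
objective_config = model_monitoring.ObjectiveConfig(drift_detection_config=drift_config)

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="regression-drift-monitoring",
    endpoint=endpoint,
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.8),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
    alert_config=alert_config,
    objective_configs=objective_config,
)
```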
You work with a team of researchers to develop state-of-the-art algorithms for financial analysis. Your team develops and debugs complex models in TensorFlow. You want to maintain the ease of debugging while also reducing the model training time. How should you set up your training environment?
A TPU VM is a virtual machine with direct access to a Cloud TPU device. TPU VMs provide a simpler and more flexible way to use Cloud TPUs because they eliminate the need for a separate host VM and network setup. TPU VMs also support interactive debugging tools such as the TensorFlow Debugger (tfdbg) and the Python Debugger (pdb), which help researchers develop and troubleshoot complex models. A v3-8 TPU VM has 8 TPU cores, which provide high performance and scalability for training large models. SSHing into the TPU VM lets you run and debug the TensorFlow code directly on the machine attached to the TPU device, without network overhead or data transfer issues; a minimal sketch follows the references below.
Reference:
1: TPU VMs Overview
2: TPU VMs Quickstart
3: Debugging TensorFlow Models on Cloud TPUs
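For illustration, once you have SSHed into the TPU VM you can connect TensorFlow to the locally attached TPU and debug the training script in place; the model below is a placeholder, not your team's actual architecture.

```python
import tensorflow as tf

# On a TPU VM the TPU chips are attached locally, so "local" resolves to them.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Placeholder model; replace with the team's research model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Because the script runs on the TPU VM itself, standard tools such as
# `import pdb; pdb.set_trace()` or tfdbg can be used to step through the code.
```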
You are an ML engineer at a global shoe store. You manage the ML models for the company's website. You are asked to build a model that will recommend new products to the user based on their purchase behavior and similarity with other users. What should you do?
A recommender system is a type of machine learning system that suggests relevant items to users based on their preferences and behavior. Recommender systems are widely used in the e-commerce, media, and entertainment industries to enhance user experience and increase revenue.
There are different types of recommender systems that use different filtering methods to generate recommendations. The most common types are:
Content-based filtering: This method uses the features of the items and the users to find the similarity between them. For example, a content-based recommender system for movies may use the genre, director, cast, and ratings of the movies, and the preferences, demographics, and history of the users, to recommend movies that are similar to the ones the user liked before.
Collaborative filtering: This method uses the feedback and ratings of the users to find the similarity between them and the items. For example, a collaborative filtering recommender system for books may use the ratings of the users for different books, and recommend books that are liked by other users who have similar ratings to the target user.
Hybrid method: This method combines content-based and collaborative filtering to overcome the limitations of each method and improve the accuracy and diversity of the recommendations. For example, a hybrid recommender system for music may use both the features of the songs and the artists, and the ratings and listening habits of the users, to recommend songs that match the user's taste and preferences.
Deep learning-based: This method uses deep neural networks to learn complex and non-linear patterns from the data and generate recommendations. Deep learning-based recommender systems can handle large-scale and high-dimensional data, and incorporate various types of information, such as text, images, audio, and video. For example, a deep learning-based recommender system for fashion may use the images and descriptions of the products, and the profiles and feedback of the users, to recommend products that suit the user's style and preferences.
For the use case of building a model that will recommend new products to the user based on their purchase behavior and similarity with other users, the best option is to build a collaborative filtering model. Collaborative filtering can leverage the implicit feedback and ratings of the users to find the items that are most likely to interest them. It can also help discover new products that the user may not be aware of, and increase the diversity and serendipity of the recommendations.
The other options are not as suitable for this use case. Building a classification model or a regression model using the features as predictors is not a good idea, as these models are not designed for recommendation tasks, and may not capture the preferences and behavior of the users. Building a knowledge-based filtering model is not relevant, as this method uses the explicit knowledge and requirements of the users to find the items that meet their criteria, and does not rely on the purchase behavior or similarity with other users.
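As a minimal illustration of the collaborative filtering idea (not the store's actual system), the sketch below learns user and product embeddings from interaction data with TensorFlow; the sizes and the random training data are assumptions standing in for real purchase-behavior signals.

```python
import numpy as np
import tensorflow as tf

num_users, num_items, embedding_dim = 1000, 500, 32  # hypothetical sizes

user_ids = tf.keras.Input(shape=(), dtype=tf.int32, name="user_id")
item_ids = tf.keras.Input(shape=(), dtype=tf.int32, name="product_id")

# Learn a latent vector per user and per product from interactions alone.
user_emb = tf.keras.layers.Embedding(num_users, embedding_dim)(user_ids)
item_emb = tf.keras.layers.Embedding(num_items, embedding_dim)(item_ids)

# Predicted affinity is the dot product of the two embeddings.
score = tf.reduce_sum(user_emb * item_emb, axis=1)

model = tf.keras.Model(inputs=[user_ids, item_ids], outputs=score)
model.compile(optimizer="adam", loss="mse")

# Toy interaction data standing in for purchase behavior.
u = np.random.randint(0, num_users, size=10_000)
i = np.random.randint(0, num_items, size=10_000)
y = np.random.rand(10_000).astype("float32")
model.fit({"user_id": u, "product_id": i}, y, epochs=1, batch_size=256)
```

Users with similar purchase histories end up with similar embeddings, so ranking products by predicted score surfaces items liked by similar users.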
You are training an LSTM-based model on AI Platform to summarize text using the following job submission script:
(job submission script not shown)
You want to ensure that training time is minimized without significantly compromising the accuracy of your model. What should you do?
The training time of a machine learning model depends on several factors, such as the complexity of the model, the size of the data, the hardware resources, and the hyperparameters. To minimize the training time without significantly compromising the accuracy of the model, one should optimize these factors as much as possible.
One of the factors that can have a significant impact on the training time is the scale-tier parameter, which specifies the type and number of machines to use for the training job on AI Platform. The scale-tier parameter can be one of the predefined values, such as BASIC, STANDARD_1, PREMIUM_1, or BASIC_GPU, or a custom value that allows you to configure the machine type, the number of workers, and the number of parameter servers.
To speed up the training of an LSTM-based model on AI Platform, you should modify the scale-tier parameter to use a higher tier or a custom configuration that provides more computational resources, such as more CPUs, GPUs, or TPUs. This reduces training time by increasing the parallelism and throughput of model training. However, you should also consider the trade-off between training time and cost, as higher tiers or custom configurations may incur higher charges.
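Since the original submission script is not shown, here is a hedged sketch of submitting an AI Platform training job with a higher scale tier through the AI Platform Training REST API from Python; the project, bucket, package, and runtime values are placeholders.

```python
from googleapiclient import discovery

project_id = "my-project"  # placeholder

job_spec = {
    "jobId": "lstm_summarizer_gpu_01",
    "trainingInput": {
        "scaleTier": "BASIC_GPU",  # upgrade from a CPU-only tier to cut training time
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],  # placeholder package
        "pythonModule": "trainer.task",
        "region": "us-central1",
        "runtimeVersion": "2.11",   # placeholder runtime/Python pairing
        "pythonVersion": "3.7",
    },
}

# Submit the job through the AI Platform Training and Prediction API.
ml = discovery.build("ml", "v1")
response = ml.projects().jobs().create(
    parent=f"projects/{project_id}", body=job_spec
).execute()
print(response)
```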
The other options are not as effective or may have adverse effects on model accuracy. Modifying the epochs parameter, which specifies the number of times the model sees the entire dataset, may reduce the training time but can also hurt the model's convergence and performance. Modifying the batch size parameter, which specifies the number of examples per batch, may affect the model's stability and generalization ability, as well as the memory usage and the gradient update frequency. Modifying the learning rate parameter, which specifies the step size of the gradient descent optimization, may affect the model's convergence and performance, as well as the risk of overshooting or getting stuck in local minima.