A company releases a new application in a new AWS account. The application includes an AWS Lambda function that processes messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The Lambda function stores the results in an Amazon S3 bucket for further downstream processing. The Lambda function needs to process the messages within a specific period of time after the messages are published. The Lambda function has a batch size of 10 messages and takes a few seconds to process a batch of messages.
As load increases on the application's first day of service, messages accumulate in the queue faster than the Lambda function can process them. Some messages miss the required processing timelines. The logs show that many messages in the queue contain invalid data. The company needs to meet the timeline requirements for messages that have valid data.
Which solution will meet these requirements?
Step 1: Reporting Failed Batch Items Configuring the Lambda function to report failed batch items (a partial batch response) tells Amazon SQS exactly which messages in a batch failed. Only those messages are returned to the queue for retry, so a single invalid message no longer forces the whole batch of 10 to be reprocessed.
Step 2: Using an SQS Dead-Letter Queue (DLQ) Configuring a dead-letter queue (DLQ) for SQS ensures that messages with invalid data, or messages that repeatedly fail processing, are moved to the DLQ. This prevents such messages from clogging the main queue and allows the system to focus on processing valid messages.
Action: Configure an SQS dead-letter queue for the main queue.
Why: A DLQ helps isolate problematic messages, preventing them from continuously reappearing in the queue and causing processing delays for valid messages.
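As a minimal CloudFormation sketch of this setup (the logical resource names are placeholders, and the maxReceiveCount of 3 is an assumed value, not one given in the question):
{
    "DeadLetterQueue": {
        "Type": "AWS::SQS::Queue"
    },
    "MainQueue": {
        "Type": "AWS::SQS::Queue",
        "Properties": {
            "RedrivePolicy": {
                "deadLetterTargetArn": { "Fn::GetAtt": ["DeadLetterQueue", "Arn"] },
                "maxReceiveCount": 3
            }
        }
    }
}
Once a message has been received maxReceiveCount times without being deleted, SQS moves it to the dead-letter queue automatically, so repeatedly failing messages stop consuming processing time.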
Step 3: Maintaining the Lambda Function's Batch Size Keeping the current batch size allows the Lambda function to continue processing multiple messages at once. Because failed items are handled separately, there is no need to increase or reduce the batch size.
Action: Maintain the Lambda function's current batch size.
Why: Changing the batch size is unnecessary if the invalid messages are properly handled by reporting failed items and using a DLQ.
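A matching event source mapping sketch (the queue and function references are placeholders) keeps the batch size at 10 while enabling partial batch responses so failed items can be reported:
{
    "SqsEventSourceMapping": {
        "Type": "AWS::Lambda::EventSourceMapping",
        "Properties": {
            "BatchSize": 10,
            "EventSourceArn": { "Fn::GetAtt": ["MainQueue", "Arn"] },
            "FunctionName": { "Ref": "ProcessingFunction" },
            "FunctionResponseTypes": ["ReportBatchItemFailures"]
        }
    }
}
With ReportBatchItemFailures enabled, the function returns a payload of the form {"batchItemFailures": [{"itemIdentifier": "<messageId>"}]}; only the listed messages become visible again for retry (and eventually reach the DLQ), while the valid messages in the batch are deleted on time.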
This corresponds to Option D: Keep the Lambda function's batch size the same. Configure the Lambda function to report failed batch items. Configure an SQS dead-letter queue.
A company uses AWS WAF to protect its cloud infrastructure. A DevOps engineer needs to give an operations team the ability to analyze log messages from AWS WAF. The operations team needs to be able to create alarms for specific patterns in the log output.
Which solution will meet these requirements with the LEAST operational overhead?
Step 1: Sending AWS WAF Logs to CloudWatch Logs Create a CloudWatch Logs log group (AWS WAF requires the name to begin with aws-waf-logs-) and configure the web ACL's logging configuration to deliver its log messages to that group.
Step 2: Creating CloudWatch Metric Filters CloudWatch metric filters search log data for specific patterns. The operations team can create filters for the log patterns of interest and set up alarms on the resulting metrics.
Action: Instruct the operations team to create CloudWatch metric filters to detect patterns in the WAF log output.
Why: Metric filters allow the team to trigger alarms based on specific patterns without needing to manually search through logs.
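As an illustrative sketch (the log group name, filter pattern, metric names, and alarm threshold are assumptions for the example, not values from the question), a metric filter plus alarm could look like:
{
    "BlockedRequestsFilter": {
        "Type": "AWS::Logs::MetricFilter",
        "Properties": {
            "LogGroupName": "aws-waf-logs-example",
            "FilterPattern": "{ $.action = \"BLOCK\" }",
            "MetricTransformations": [
                {
                    "MetricNamespace": "WAFLogs",
                    "MetricName": "BlockedRequests",
                    "MetricValue": "1"
                }
            ]
        }
    },
    "BlockedRequestsAlarm": {
        "Type": "AWS::CloudWatch::Alarm",
        "Properties": {
            "Namespace": "WAFLogs",
            "MetricName": "BlockedRequests",
            "Statistic": "Sum",
            "Period": 60,
            "EvaluationPeriods": 1,
            "Threshold": 100,
            "ComparisonOperator": "GreaterThanThreshold",
            "TreatMissingData": "notBreaching"
        }
    }
}
Because both the filter and the alarm are managed CloudWatch features, this approach requires no custom log-processing code, which keeps operational overhead low.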
This corresponds to Option A: Create an Amazon CloudWatch Logs log group. Configure the appropriate AWS WAF web ACL to send log messages to the log group. Instruct the operations team to create CloudWatch metric filters.
A DevOps engineer is setting up an Amazon Elastic Container Service (Amazon ECS) blue/green deployment for an application by using AWS CodeDeploy and AWS CloudFormation. During the deployment window, the application must be highly available and CodeDeploy must shift 10% of traffic to a new version of the application every minute until all traffic is shifted.
Which configuration should the DevOps engineer add in the CloudFormation template to meet these requirements?
This corresponds to Option B: Add the AWS::CodeDeployBlueGreen transform and the AWS::CodeDeploy::BlueGreen hook parameter with the CodeDeployDefault.ECSLinear10PercentEvery1Minutes deployment configuration.
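In the CloudFormation template, the transform is declared at the top level and the hook carries a traffic-routing configuration; TimeBasedLinear with a StepPercentage of 10 and a BakeTimeMins of 1 (the minutes between shifts) expresses the same behavior as the CodeDeployDefault.ECSLinear10PercentEvery1Minutes configuration. An abbreviated sketch follows (logical IDs are placeholders, and required pieces such as the hook's ServiceRole and the referenced task definitions, task sets, listener, and target groups are omitted):
{
    "Transform": ["AWS::CodeDeployBlueGreen"],
    "Hooks": {
        "CodeDeployBlueGreenHook": {
            "Type": "AWS::CodeDeploy::BlueGreen",
            "Properties": {
                "TrafficRoutingConfig": {
                    "Type": "TimeBasedLinear",
                    "TimeBasedLinear": {
                        "StepPercentage": 10,
                        "BakeTimeMins": 1
                    }
                },
                "Applications": [
                    {
                        "Target": {
                            "Type": "AWS::ECS::Service",
                            "LogicalID": "ECSService"
                        },
                        "ECSAttributes": {
                            "TaskDefinitions": ["BlueTaskDefinition", "GreenTaskDefinition"],
                            "TaskSets": ["BlueTaskSet", "GreenTaskSet"],
                            "TrafficRouting": {
                                "ProdTrafficRoute": {
                                    "Type": "AWS::ElasticLoadBalancingV2::Listener",
                                    "LogicalID": "ProdListener"
                                },
                                "TargetGroups": ["BlueTargetGroup", "GreenTargetGroup"]
                            }
                        }
                    }
                ]
            }
        }
    }
}
Because the blue and green task sets run side by side while traffic shifts gradually, the application stays highly available throughout the deployment window.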
A software team is using AWS CodePipeline to automate its Java application release pipeline. The pipeline consists of a source stage, then a build stage, and then a deploy stage. Each stage contains a single action that has a runOrder value of 1.
The team wants to integrate unit tests into the existing release pipeline. The team needs a solution that deploys only the code changes that pass all unit tests.
Which solution will meet these requirements?
* Modify the Build Stage to Add a Test Action with a RunOrder Value of 2:
A stage in AWS CodePipeline can contain multiple actions. Adding a test action with a runOrder value of 2 makes it execute only after the initial build action (runOrder 1) completes.
* Use AWS CodeBuild as the Action Provider to Run Unit Tests:
AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages.
Using CodeBuild to run unit tests ensures that the tests are executed in a controlled environment and that only the code changes that pass the unit tests proceed to the deploy stage.
Example configuration in CodePipeline:
{
    "name": "BuildStage",
    "actions": [
        {
            "name": "Build",
            "actionTypeId": {
                "category": "Build",
                "owner": "AWS",
                "provider": "CodeBuild",
                "version": "1"
            },
            "runOrder": 1
        },
        {
            "name": "Test",
            "actionTypeId": {
                "category": "Test",
                "owner": "AWS",
                "provider": "CodeBuild",
                "version": "1"
            },
            "runOrder": 2
        }
    ]
}
By integrating the unit tests into the build stage and ensuring they run after the build process, the pipeline guarantees that only code changes passing all unit tests are deployed.