A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB). The ALB routes requests to an AWS Lambda function. Many different versions of the application are in use at any given time, including versions that are in testing by a subset of users. The version of the application is defined in the user-agent header that is sent with all requests to the API.
After a series of recent changes to the API, the company has observed issues with the application. The company needs to gather a metric for each API operation, by response code, for each version of the application that is in use. A DevOps engineer has modified the Lambda function to extract the API operation name, the version information from the user-agent header, and the response code.
Which additional set of actions should the DevOps engineer take to gather the required metrics?
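For context, a minimal sketch of the Lambda modification described in the scenario (not the answer to the question), assuming an ALB target-group event shape; the operation-name derivation from the path and method is illustrative:

```python
import json

def handler(event, context):
    # ALB invokes Lambda with request headers under event["headers"] (lowercase keys).
    headers = event.get("headers", {})
    app_version = headers.get("user-agent", "unknown")  # application version travels in the user-agent header

    # Illustrative: derive the API operation name from the HTTP method and path.
    operation = f'{event.get("httpMethod", "GET")} {event.get("path", "/")}'

    status_code = 200  # placeholder for the response code produced by the business logic
    # operation, app_version, and status_code are now available for metric emission.

    return {
        "statusCode": status_code,
        "statusDescription": "200 OK",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"ok": True}),
    }
```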
A company uses AWS Organizations to manage its AWS accounts. The organization root has a child OU that is named Department. The Department OU has a child OU that is named Engineering. The default FullAWSAccess policy is attached to the root, the Department OU, and the Engineering OU.
The company has many AWS accounts in the Engineering OU. Each account has an administrative IAM role with the AdministratorAccess IAM policy attached. The default FullAWSAccess policy is also attached to each account.
A DevOps engineer plans to remove the FullAWSAccess policy from the Department OU. The DevOps engineer will replace the policy with a policy that contains an Allow statement for all Amazon EC2 API operations.
What will happen to the permissions of the administrative IAM roles as a result of this change?
* Impact of Removing FullAWSAccess and Adding a Policy for EC2 Actions:
The FullAWSAccess policy allows all actions on all resources by default. Removing this policy from the Department OU limits the permissions available to everything beneath that OU, including the Engineering OU and its accounts.
Adding a policy that allows only Amazon EC2 API operations will restrict the permissions to EC2 actions only.
* Permissions of Administrative IAM Roles:
The administrative IAM roles in the Engineering OU have the AdministratorAccess policy attached, which grants full access to all AWS services and resources.
Because SCPs define the maximum permissions available at each level of the organization, an action must be allowed by the SCPs at every level between the root and the account. With only an EC2 allow statement at the Department OU level, for all accounts in the Engineering OU:
They will retain full access to EC2 actions, because both the new SCP and the AdministratorAccess policy allow them.
They will lose access to all other actions, because no SCP attached at the Department OU level allows them, regardless of what the IAM policy grants.
* Conclusion:
All API actions on EC2 resources will be allowed.
All other API actions will be denied because no broader allow statement exists in the SCP attached to the Department OU.
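As an illustration, a minimal boto3 sketch of replacing the FullAWSAccess attachment on the Department OU with an SCP that allows only EC2 actions; the OU ID and policy names are hypothetical:

```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical IDs for the Department OU and the managed FullAWSAccess policy.
DEPARTMENT_OU_ID = "ou-xxxx-department"
FULL_AWS_ACCESS_POLICY_ID = "p-FullAWSAccess"

# SCP that allows only Amazon EC2 API operations.
ec2_only_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="AllowEC2Only",
    Description="Allow only Amazon EC2 API operations",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(ec2_only_scp),
)

# Attach the new SCP first, then detach FullAWSAccess from the Department OU.
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=DEPARTMENT_OU_ID)
org.detach_policy(PolicyId=FULL_AWS_ACCESS_POLICY_ID, TargetId=DEPARTMENT_OU_ID)
```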
A company wants to deploy a workload on several hundred Amazon EC2 instances. The company will provision the EC2 instances in an Auto Scaling group by using a launch template.
The workload will pull files from an Amazon S3 bucket, process the data, and put the results into a different S3 bucket. The EC2 instances must have least-privilege permissions and must use temporary security credentials.
Which combination of steps will meet these requirements? (Select TWO.)
To meet the requirements of deploying a workload on several hundred EC2 instances with least-privilege permissions and temporary security credentials, the company should use an IAM role and an instance profile. An IAM role is a way to grant permissions to an entity that you trust, such as an EC2 instance. An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. By using an IAM role and an instance profile, the EC2 instances can automatically receive temporary security credentials from the AWS Security Token Service (STS) and use them to access the S3 buckets. This way, the company does not need to manage or rotate any long-term credentials, such as IAM users or access keys.
To use an IAM role and an instance profile, the company should create an IAM role that has the appropriate permissions for S3 buckets. The permissions should allow the EC2 instances to read from the source S3 bucket and write to the destination S3 bucket. The company should also create a trust policy for the IAM role that specifies that EC2 is allowed to assume the role. Then, the company should add the IAM role to an instance profile. An instance profile can have only one IAM role, so the company does not need to create multiple roles or profiles for this scenario.
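A minimal boto3 sketch of that role and instance profile setup; the bucket, role, and policy names are illustrative:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 assume the role, so instances receive temporary STS credentials.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Least-privilege permissions: read from the source bucket, write to the destination bucket.
s3_permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::source-bucket/*"},
        {"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::results-bucket/*"},
    ],
}

iam.create_role(RoleName="WorkloadRole", AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(RoleName="WorkloadRole", PolicyName="S3Access", PolicyDocument=json.dumps(s3_permissions))

# An instance profile holds exactly one role and is what the launch template references.
iam.create_instance_profile(InstanceProfileName="WorkloadInstanceProfile")
iam.add_role_to_instance_profile(InstanceProfileName="WorkloadInstanceProfile", RoleName="WorkloadRole")
```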
Next, the company should update the launch template to include the IAM instance profile. A launch template is a way to save launch parameters for EC2 instances, such as the instance type, security group, user data, and IAM instance profile. By using a launch template, the company can ensure that all EC2 instances in the Auto Scaling group have consistent configuration and permissions. The company should specify the name or ARN of the IAM instance profile in the launch template. This way, when the Auto Scaling group launches new EC2 instances based on the launch template, they will automatically receive the IAM role and its permissions through the instance profile.
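And a sketch of referencing that instance profile from the launch template; the AMI ID and instance type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch template that attaches the instance profile so every instance launched by the
# Auto Scaling group automatically receives the role's temporary credentials.
ec2.create_launch_template(
    LaunchTemplateName="workload-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "c6i.large",          # placeholder instance type
        "IamInstanceProfile": {"Name": "WorkloadInstanceProfile"},
    },
)
```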
The other options are not correct because they do not meet the requirements or follow best practices. Creating an IAM user and generating a secret key and token is not a good option because it involves managing long-term credentials that need to be rotated regularly. Moreover, embedding credentials in user data is not secure because user data is visible to anyone who can describe the EC2 instance. Creating a trust anchor and profile is not a valid option because trust anchors are used for certificate-based authentication, not for IAM roles or instance profiles. Modifying user data to use a new secret key and token is also not a good option because it requires updating user data every time the credentials change, which is not scalable or efficient.
A company is using an Amazon Aurora cluster as the data store for its application. The Aurora cluster is configured with a single DB instance. The application performs read and write operations on the database by using the cluster's instance endpoint.
The company has scheduled an update to be applied to the cluster during an upcoming maintenance window. The cluster must remain available with the least possible interruption during the maintenance window.
What should a DevOps engineer do to meet these requirements?
To meet the requirements, the DevOps engineer should do the following:
Turn on the Multi-AZ option on the Aurora cluster.
Update the application to use the Aurora cluster endpoint for write operations.
Update the application to use the Aurora cluster's reader endpoint for read operations.
Turning on the Multi-AZ option adds an Aurora Replica in a different Availability Zone. With a replica available, Aurora can fail over to it during the maintenance window, so the cluster remains available with the least possible interruption.
Updating the application to use the cluster endpoint for write operations ensures that writes always reach the current primary (writer) instance; if a failover occurs during maintenance, the cluster endpoint automatically points to the newly promoted instance.
Updating the application to use the reader endpoint for read operations lets reads continue to be served by the replica while the primary instance is being updated.
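A minimal boto3 sketch of adding a reader in a second Availability Zone and retrieving the two endpoints; the identifiers, engine, and instance class are illustrative:

```python
import boto3

rds = boto3.client("rds")

# Add an Aurora Replica in a different AZ; with at least one replica, Aurora can
# fail over to it during the maintenance window instead of taking the cluster offline.
rds.create_db_instance(
    DBInstanceIdentifier="app-cluster-reader-1",
    DBClusterIdentifier="app-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
    AvailabilityZone="us-east-1b",
)

# The application should use the cluster (writer) endpoint for writes and the
# reader endpoint for reads, instead of the single instance endpoint.
cluster = rds.describe_db_clusters(DBClusterIdentifier="app-cluster")["DBClusters"][0]
print("writer endpoint:", cluster["Endpoint"])
print("reader endpoint:", cluster["ReaderEndpoint"])
```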
A company has a legacy application. A DevOps engineer needs to automate the process of building the deployable artifact for the legacy application. The solution must store the deployable artifact in an existing Amazon S3 bucket for future deployments to reference.
Which solution will meet these requirements in the MOST operationally efficient way?
This approach is the most operationally efficient because it leverages the benefits of containerization, such as isolation and reproducibility, as well as AWS managed services. AWS CodeBuild is a fully managed build service that can compile your source code, run tests, and produce deployable software packages. By using a custom Docker image that includes all dependencies, you can ensure that the environment in which your code is built is consistent. Using Amazon ECR to store Docker images lets you easily deploy the images to any environment. Also, you can directly upload the build artifacts to Amazon S3 from AWS CodeBuild, which is beneficial for version control and archival purposes.
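A sketch of what such a CodeBuild project definition could look like in boto3; the repository URL, image URI, bucket name, and service role are placeholders:

```python
import boto3

codebuild = boto3.client("codebuild")

# CodeBuild project that builds inside a custom image stored in Amazon ECR and
# uploads the resulting artifact to the existing S3 bucket.
codebuild.create_project(
    name="legacy-app-build",
    source={
        "type": "CODECOMMIT",
        "location": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/legacy-app",
    },
    artifacts={"type": "S3", "location": "existing-artifact-bucket", "packaging": "ZIP"},
    environment={
        "type": "LINUX_CONTAINER",
        "computeType": "BUILD_GENERAL1_SMALL",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/legacy-build:latest",
        "imagePullCredentialsType": "SERVICE_ROLE",
    },
    serviceRole="arn:aws:iam::123456789012:role/codebuild-legacy-app-role",
)
```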