AWS Solutions Architect Professional Exam Dumps 2023


AWS Solutions Architect Professional Certification Practice Tests 2023. Contains 900+ exam questions to help you pass the exam on your first attempt.

SkillCertPro offers real exam questions for practice across all major IT certifications.

  • For the full set of 900+ questions, go to

https://skillcertpro.com/product/aws-solutions-architect-professional-exam-questions/

  • SkillCertPro offers detailed explanations for each question, which helps you understand the concepts better.
  • It is recommended to score above 85% on SkillCertPro exams before attempting the real exam.
  • SkillCertPro updates exam questions every 2 weeks.
  • You will get lifetime access and lifetime free updates.
  • SkillCertPro assures a 100% pass guarantee on the first attempt.

 

Below are 10 free sample questions.

Question 1:

As a Solutions Architect Professional, you have been asked to ensure that monitoring can be carried out for EC2 instances located in different AWS Regions. Which of the options below can be used to accomplish this?

 A. Create separate dashboards in every region
 B. Register instances running on different regions to CloudWatch
 C. Have one single dashboard to report metrics to CloudWatch from different Regions
 D. This is not possible

Answer: C

Explanation:

You can monitor AWS resources in multiple Regions using a single CloudWatch dashboard. For example, you can create a dashboard that shows CPU utilization for an EC2 instance located in the us-west-2 Region alongside your billing metrics, which are located in the us-east-1 Region.

Option A is incorrect because you can monitor AWS resources in multiple Regions using a single CloudWatch dashboard; separate dashboards per Region are unnecessary. Option B is incorrect because you do not need to explicitly register instances from different Regions. Option C is CORRECT because you can monitor AWS resources in multiple Regions using a single CloudWatch dashboard. Option D is incorrect because, as mentioned in Option C, monitoring EC2 instances across Regions is possible using a single dashboard built from CloudWatch metrics.

For more information on CloudWatch dashboards, please refer to: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cross_region_dashboard.html

The correct answer is: Have one single dashboard to report metrics to CloudWatch from different Regions.
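To make this concrete, here is a minimal boto3 sketch, assuming configured credentials and a hypothetical instance ID, that builds a single dashboard whose widgets pull metrics from two different Regions:

```python
import json
import boto3

# Dashboards are a global resource; the client Region only determines
# which API endpoint receives the call.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

dashboard_body = {
    "widgets": [
        {
            "type": "metric", "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                # Hypothetical instance ID; this widget reads from us-west-2.
                "metrics": [["AWS/EC2", "CPUUtilization",
                             "InstanceId", "i-0123456789abcdef0"]],
                "region": "us-west-2",
                "title": "EC2 CPU (us-west-2)",
            },
        },
        {
            "type": "metric", "x": 12, "y": 0, "width": 12, "height": 6,
            "properties": {
                # Billing metrics are published only in us-east-1.
                "metrics": [["AWS/Billing", "EstimatedCharges",
                             "Currency", "USD"]],
                "region": "us-east-1",
                "title": "Estimated charges (us-east-1)",
            },
        },
    ]
}

cloudwatch.put_dashboard(
    DashboardName="global-monitoring",
    DashboardBody=json.dumps(dashboard_body),
)
```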

Question 2:

You are working as a Solutions Architect for a credit company. They are running a customer analytics web application in AWS, and your data analytics department has asked you to add a reporting tier to the application. This new component will aggregate and publish status reports every hour from user-generated information stored in a Multi-AZ RDS MySQL database instance. The RDS instance is configured with ElastiCache as a database caching layer between the application tier and the database tier. How do you implement a reporting tier with as little impact on your database as possible?

 A. Generate the reports by querying the ElastiCache database caching tier. Use Kibana to visualize the reports.
 B. Continuously send transaction logs from your master database to an S3 bucket and use S3 byte range requests to generate the reports off the S3 bucket. Use QuickSight to visualize the reports.
 C. Launch an RDS Read Replica linked to your Multi-AZ master database and generate reports from the Read Replica. Use QuickSight to visualize the reports.
 D. Query the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ, and generate the report from the results. Use Kibana to visualize the reports.

 

Answer: C

Explanation:

Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.

Launching an RDS Read Replica linked to your Multi-AZ master database and generating reports from the Read Replica, then using QuickSight to visualize the reports, is correct because it uses the Read Replica of the database for the report queries.

Querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ, and generating the report from the results then using Kibana to visualize the reports is incorrect because you cannot access the standby instance.

Continuously sending transaction logs from your master database to an S3 bucket and using S3 byte range requests to generate the reports off the S3 bucket then using QuickSight to visualize the reports is incorrect because sending the logs to S3 would add to the overhead on the database instance.

Generating the reports by querying the ElastiCache database caching tier then using Kibana to visualize the reports is incorrect because querying on ElastiCache may not always give you the latest and entire data, as the cache may not always be up-to-date.
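To make the correct option concrete, below is a minimal boto3 sketch; all identifiers are hypothetical. It creates a read replica of the Multi-AZ master and fetches the replica endpoint that the reporting jobs would connect to:

```python
import boto3

rds = boto3.client("rds")

# Create the replica; reporting queries will run here, not on the master.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="analytics-reporting-replica",
    SourceDBInstanceIdentifier="customer-analytics-master",
    DBInstanceClass="db.r5.large",
)

# Block until the replica is ready, then fetch its endpoint for the
# reporting tier's connection string.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="analytics-reporting-replica"
)
replica = rds.describe_db_instances(
    DBInstanceIdentifier="analytics-reporting-replica"
)["DBInstances"][0]
print(replica["Endpoint"]["Address"])
```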

Reference:

https://aws.amazon.com/rds/details/read-replicas

Question 3:

An auditor needs read-only access to all AWS resources and to logs of all events that have occurred in AWS. What is the best way to create this sort of access?
Choose the correct answer from the options below:

 A. Create a role that has the required permissions for the auditor.
 B. Create an SNS notification that sends the CloudTrail log files to the auditor's email when CloudTrail delivers the logs to S3, but do not allow the auditor access to the AWS environment.
 C. The company should contact AWS as part of the shared responsibility model, and AWS will grant required access to the third-party auditor.
 D. Enable CloudTrail logging and create an IAM user who has read-only permissions to the required AWS resources, including the bucket containing the CloudTrail logs.


Answer: D

Explanation:

Option A is incorrect because just creating a role is not sufficient; CloudTrail logging needs to be enabled as well. Option B is incorrect because sending the logs via email is not a good architecture. Option C is incorrect because granting the auditor access to AWS resources is not AWS's responsibility; it is the AWS user or account owner's responsibility. Option D is CORRECT because you need to enable CloudTrail logging in order to generate logs with information about all the activities related to the AWS account and resources, and it also creates an IAM user that has permission to read the logs stored in the S3 bucket.

More information on AWS CloudTrail: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting.

For more information on CloudTrail, please visit: https://aws.amazon.com/cloudtrail/

The correct answer is: Enable CloudTrail logging and create an IAM user who has read-only permissions to the required AWS resources, including the bucket containing the CloudTrail logs.
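As a sketch of what Option D involves, the setup can be scripted with boto3; the trail, bucket, and user names below are hypothetical, and the S3 bucket is assumed to already exist with a bucket policy that allows CloudTrail to write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
iam = boto3.client("iam")

# Enable a multi-Region trail that delivers logs to the (pre-existing) bucket.
cloudtrail.create_trail(
    Name="audit-trail",
    S3BucketName="example-cloudtrail-logs",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="audit-trail")

# Give the auditor an IAM user with the AWS-managed read-only policy,
# which covers reading resources and the CloudTrail log bucket.
iam.create_user(UserName="external-auditor")
iam.attach_user_policy(
    UserName="external-auditor",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```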

 

Question 4:

An Amazon Redshift cluster with four nodes runs 24/7/365, with the potential need to add one on-demand node for one to two days once during the year. Which architecture would have the lowest possible cost for this requirement?
Choose the correct answer from the options below:

 A. Purchase 4 reserved nodes and rely on on-demand instances for the fifth node, if required.
 B. Purchase 5 reserved nodes to cover all possible usage during the year.
 C. Purchase 4 reserved nodes and bid on spot instances for the extra node if required.
 D. Purchase 2 reserved nodes and utilize 3 on-demand nodes only for peak usage times.

Answer: A

Explanation:

Option A is CORRECT because (a) the application requires 4 nodes throughout the year, and reserved nodes save cost, and (b) since the need for the extra node is not assured, an on-demand node can be purchased if and when needed. Option B is incorrect because reserving a fifth node is unnecessary. Option C is incorrect because, even though spot instances are cheaper than on-demand instances, they should only be used if the application can tolerate their sudden termination. Since the question does not mention this, purchasing spot instances may not be a good option. Option D is incorrect because reserving only 2 nodes would not be sufficient.

For more on Reserved Instances, see: https://aws.amazon.com/ec2/pricing/reserved-instances/

The correct answer is: Purchase 4 reserved nodes and rely on on-demand instances for the fifth node, if required.
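A back-of-the-envelope calculation shows why. The hourly rates below are made-up illustrative numbers, not actual Redshift pricing:

```python
# Illustrative rates only; substitute real Redshift pricing for your Region.
RESERVED_RATE = 0.25    # effective $/hour per reserved node
ON_DEMAND_RATE = 0.85   # $/hour per on-demand node
HOURS_PER_YEAR = 24 * 365
BURST_HOURS = 48        # one extra node for ~2 days, once a year

option_a = 4 * RESERVED_RATE * HOURS_PER_YEAR + ON_DEMAND_RATE * BURST_HOURS
option_b = 5 * RESERVED_RATE * HOURS_PER_YEAR

print(f"Option A (4 reserved + on-demand burst): ${option_a:,.2f}/year")
print(f"Option B (5 reserved):                   ${option_b:,.2f}/year")
# The fifth reserved node bills for all 8,760 hours of the year, so
# Option A wins whenever the burst lasts far less than the full year.
```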

 

Question 5:

There is a requirement to move a legacy app to AWS. The legacy app still requires access to some services in the on-premises architecture. Which two of the options below will help fulfill this requirement?

 A. An AWS Direct Connect link between the VPC and the network housing the internal services.
 B. An Internet Gateway to allow a VPN connection.
 C. An Elastic IP address on the VPC instance.
 D. An IP address space that does not conflict with the one on-premises.

Answer: A, D

Explanation:

The scenario requires you to connect your on-premises servers with Amazon VPC. When such scenarios are presented, always think about services such as Direct Connect, VPN, and VM Import/Export, as they help either connect instances across locations or import them from one location to another.

Option A is CORRECT because Direct Connect sets up a dedicated connection between the on-premises data center and Amazon VPC, giving you the ability to connect your on-premises servers with the instances in your VPC. Option B is incorrect because you would normally create a VPN connection based on a customer gateway and a virtual private gateway (VGW) in AWS. Option C is incorrect because EIPs are not needed; the instances in the VPC can communicate with on-premises servers via their private IP addresses. Option D is CORRECT because there should be no conflict between the IP address space of the on-premises servers and that of the instances in the VPC for them to communicate; a quick way to check for overlap is shown below.

The correct answers are: An AWS Direct Connect link between the VPC and the network housing the internal services. An IP address space that does not conflict with the one on-premises.
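Here is a small overlap check using Python's standard ipaddress module; the CIDR ranges are hypothetical:

```python
from ipaddress import ip_network

# Hypothetical address spaces for the two networks.
on_premises = ip_network("10.0.0.0/16")
vpc_cidr = ip_network("10.1.0.0/16")

if on_premises.overlaps(vpc_cidr):
    raise ValueError("CIDR ranges overlap; routing between the networks will break")
print(f"{vpc_cidr} does not overlap {on_premises}; safe to connect")
```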


Question 6:

Which of the following must be done when generating a pre-signed URL in S3 in order to ensure that the user who is given the pre-signed URL has permission to upload the object?

 A. Ensure the user has write permission to S3.
 B. Ensure the user has read permission to S3.
 C. Ensure that the person who has created the pre-signed URL has the permission to upload the object to the appropriate S3 bucket.
 D. Create a CloudFront distribution.

Answer: C

Explanation:

Option A is incorrect because the recipient's own write permission is not what matters; if the person who created the pre-signed URL does not have write permission to S3, the person who is given the URL will not have it either. Option B is incorrect for the same reason; read permission on S3 does not govern uploads via a pre-signed URL. Option C is CORRECT because, in order to successfully upload an object to S3, the pre-signed URL must be created by someone who has permission to perform the operation that the pre-signed URL is based upon. Option D is incorrect because a CloudFront distribution is not needed in this scenario.

For more information on pre-signed URLs, please visit: http://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html

The correct answer is: Ensure that the person who has created the pre-signed URL has the permission to upload the object to the appropriate S3 bucket.
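To illustrate, here is a minimal boto3 sketch with hypothetical bucket and key names. The credentials that generate the URL are the ones whose upload permission the URL inherits:

```python
import boto3
import requests  # third-party; used here only to demonstrate the upload

# These credentials must be allowed to put_object into the bucket;
# the URL's recipient inherits exactly that permission.
s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "example-bucket", "Key": "uploads/report.csv"},
    ExpiresIn=3600,  # URL is valid for one hour
)

# Anyone holding the URL can now upload without their own AWS credentials.
with open("report.csv", "rb") as body:
    requests.put(url, data=body).raise_for_status()
```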

 

Question 7:

A user has set up Auto Scaling with ELB on their EC2 instances. The user wants to configure Auto Scaling so that whenever CPU utilization is below 10%, it removes one instance. How can the user configure this?

 A. The user can get an email using SNS when the CPU utilization is less than 10%. The user can use the desired capacity of Auto Scaling to remove the instance.
 B. Use CloudWatch to monitor the data and Auto Scaling to remove the instances using scheduled actions.
 C. Configure CloudWatch to send a notification to Auto Scaling Launch configuration when the CPU utilization is less than 10% and configure the Auto Scaling policy to remove the instance.
 D. Configure CloudWatch to send a notification to the Auto Scaling group when the CPU Utilization is less than 10% and configure the Auto Scaling policy to remove the instance.

 

Answer: D

Explanation:

Option A is incorrect because, rather than emailing the user, CloudWatch should trigger a notification to the Auto Scaling group to terminate the instance; manually updating the desired capacity will not work in this case. Option B is incorrect because scheduled scaling is used to scale your application in response to predictable load changes, not in response to a metric notification. Option C is incorrect because the notification should be sent to the Auto Scaling group, not the launch configuration. Option D is CORRECT because the notification is sent to the Auto Scaling group, which then removes an instance from the running instances.

More information on Auto Scaling: Auto Scaling helps you maintain application availability and allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define. You can use Auto Scaling to help ensure that you are running your desired number of Amazon EC2 instances. Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance, and decrease capacity during lulls to reduce costs.

For more information on Auto Scaling, please visit:
https://aws.amazon.com/autoscaling/
https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html

The correct answer is: Configure CloudWatch to send a notification to the Auto Scaling group when the CPU Utilization is less than 10% and configure the Auto Scaling policy to remove the instance.
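As a concrete sketch of Option D, with a hypothetical Auto Scaling group name, a simple scaling policy plus the CloudWatch alarm that triggers it could be wired up with boto3 like this:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# A simple scaling policy that removes one instance when executed.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-in-on-low-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=-1,
)

# The alarm watches the group's average CPU and notifies the policy
# (not the launch configuration) when it drops below 10%.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-cpu-below-10",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```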

Question 8:

You are the new IT architect at a company that operates a mobile sleep-tracking application. When activated at night, the mobile app sends collected data points of 1 KB every 5 minutes to your middleware. The middleware layer takes care of authenticating the user and writing the data points into an Amazon DynamoDB table. Every morning, you scan the table to extract and aggregate last night's data on a per-user basis and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently, you have around 100k users. You have been tasked with optimizing the architecture of the middleware system to lower cost. What would you recommend?
Choose 2 options from below:

 A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
 B. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.
 C. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
 D. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
 E. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3.

Answer: A, C

Explanation:

Option A is CORRECT because (a) the stored data would be old/obsolete anyway and need not be kept, lowering the cost, and (b) storing data in DynamoDB is expensive, so you should not keep tables whose data is no longer needed. Option B is incorrect because (a) storing the data in DynamoDB is more expensive than S3, and (b) giving the app direct access to DynamoDB to read the data is an operational overhead. Option C is CORRECT because (a) SQS reduces the provisioned write throughput needed, cutting down on costs, and (b) the queue acts as a buffer that absorbs sudden load spikes, eliminating the risk of exceeding provisioned capacity. Option D is incorrect because the data is only read once before it is stored to S3; a cache is only useful when items are read multiple times, and in this scenario it is the write operations that most need optimizing, not the reads. Option E is incorrect because (a) an Amazon Redshift cluster is primarily used for OLAP workloads, not OLTP, so it is not suitable for this scenario, and (b) moving the storage to a Redshift cluster means deploying a large number of continuously running instances, which is not a cost-effective solution.

For complete guidelines on working with DynamoDB, please visit: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html

The correct answers are: Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
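To illustrate Option C, here is a rough consumer sketch; the queue URL and table name are hypothetical. The queue absorbs bursts, so the DynamoDB table's write throughput can be provisioned for the average rate rather than the peak:

```python
import json
import boto3

sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table("sleep-datapoints")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/datapoint-buffer"

while True:
    # Long-poll for up to 10 buffered data points at a time.
    messages = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    ).get("Messages", [])
    if not messages:
        continue

    # batch_writer aggregates the puts into BatchWriteItem calls.
    with table.batch_writer() as batch:
        for msg in messages:
            batch.put_item(Item=json.loads(msg["Body"]))

    # Only delete from the queue after the writes have been handed off.
    sqs.delete_message_batch(
        QueueUrl=QUEUE_URL,
        Entries=[{"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]}
                 for m in messages],
    )
```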

 

Question 9:

Your company's policies require encryption of sensitive data at rest. You are considering the possible options for protecting data at rest on an EBS data volume attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest?
Choose 3 options from below:

 A. Implement third party volume encryption tools
 B. Do nothing as EBS volumes are encrypted by default
 C. Encrypt data inside your applications before storing it on EBS
 D. Encrypt data using native data encryption drivers at the file system level
 E. Implement SSL/TLS for all services running on the server

Answer: A, C, D

Explanation:

You can encrypt data at rest by using native data encryption, using a third-party encryption tool, or simply encrypting the data before storing it on the volume. Option A is CORRECT because it uses a third-party volume encryption tool. Option B is incorrect because EBS volumes are not encrypted by default. Option C is CORRECT as it encrypts the data before storing it on EBS. Option D is CORRECT as it uses native data encryption at the file system level. Option E is incorrect as SSL/TLS is used to secure data in transit, not at rest.

The correct answers are: Implement third party volume encryption tools, Encrypt data inside your applications before storing it on EBS, Encrypt data using native data encryption drivers at the file system level.
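As an illustration of encrypting data inside the application before it is written to the EBS-backed file system (Option C), here is a minimal sketch using the third-party cryptography package; the mount path is hypothetical, and in practice the key would come from a secure store such as KMS rather than being generated in place:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # in practice, fetch this from KMS or a vault
cipher = Fernet(key)

plaintext = b"sensitive customer record"

# /data is assumed to be the EBS volume's mount point.
with open("/data/record.enc", "wb") as f:
    f.write(cipher.encrypt(plaintext))

# Only holders of the key can recover the plaintext at read time.
with open("/data/record.enc", "rb") as f:
    assert cipher.decrypt(f.read()) == plaintext
```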

 

Question 10:

What can be done in CloudFront to ensure that as soon as the content is changed at the source, it is delivered to the client? Choose an answer from the options below.

 A. Use the fast invalidate feature provided in CloudFront
 B. Set TTL to 10 seconds
 C. Set TTL to 0 seconds
 D. Dynamic content cannot be served from CloudFront
 E. You have to contact the AWS support center to enable this feature

Answer: C

Explanation:

In CloudFront, to enforce delivery of content to the user as soon as it is changed at the origin, the time to live (TTL) should be set to 0.

Option A is incorrect because invalidation is used to remove content from CloudFront edge location caches before it expires; the next time a viewer requests the object, CloudFront fetches the content from the origin. Setting TTL to 0, by contrast, forces CloudFront to deliver the latest content as soon as the origin updates it. Option B is incorrect because setting TTL to 10 will keep the content in the cache for some time even after the origin updates it. Option C is CORRECT because setting TTL to 0 will enforce delivery of content to the user as soon as it is changed at the origin. Option D is incorrect because CloudFront can certainly serve dynamic content. Option E is incorrect because you do not have to contact the AWS support center for this scenario.

More information on TTL in CloudFront: you can control how long your objects stay in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration allows you to serve dynamic content; this low-TTL approach is also described in the AWS documentation.

For more information on CloudFront dynamic content, please refer to: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html

The correct answer is: Set TTL to 0 seconds.
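For illustration, here is a minimal boto3 sketch with a hypothetical distribution ID that sets the default cache behavior's TTLs to 0. This assumes the distribution uses the legacy TTL settings; newer distributions typically control caching through cache policies instead:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Fetch the current config along with the ETag required for updates.
resp = cloudfront.get_distribution_config(Id="E1EXAMPLE12345")
config, etag = resp["DistributionConfig"], resp["ETag"]

# With all three TTLs at 0, CloudFront revalidates with the origin
# on every request, so changed content is served immediately.
behavior = config["DefaultCacheBehavior"]
behavior["MinTTL"] = 0
behavior["DefaultTTL"] = 0
behavior["MaxTTL"] = 0

cloudfront.update_distribution(
    Id="E1EXAMPLE12345", DistributionConfig=config, IfMatch=etag
)
```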

