Scenario-Based AWS Interview Questions and Answers
Prepare for scenario-based AWS interview questions and answers. Master real-world situations to showcase your cloud expertise and problem-solving skills.
NEW QUESTION 1
One of the criteria for a new deployment is that the customer wants to use AWS Storage Gateway. However, you are not sure whether you should use gateway-cached volumes or gateway-stored volumes or even what the differences are. Which statement below best describes those differences?
- Gateway-cached lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally.
- Gateway-stored enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3.
- Gateway-cached is free whilst gateway-stored is not.
- Gateway-cached is up to 10 times faster than gateway-stored.
- Gateway-stored lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally.
- Gateway-cached enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3.
Answer: A
Explanation:
Volume gateways provide cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers. The gateway supports the following volume configurations:
Gateway-cached volumes — You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-cached volumes offer substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.
Gateway-stored volumes — If you need low-latency access to your entire data set, you can configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive off-site backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2.
NEW QUESTION 2
A user is storing a large number of objects on AWS S3. The user wants to implement the search functionality among the objects. How can the user achieve this?
- Use the indexing feature of S3.
- Tag the objects with the metadata to search on that.
- Use the query functionality of S3.
- Make your own DB system which stores the S3 metadata for the search functionality
Answer: D
Explanation:
In Amazon Web Services, S3 does not provide any query facility. To retrieve a specific object the user needs to know the exact bucket and object key. In this case it is recommended to build your own DB system that manages the S3 metadata and key mapping.
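As a sketch of that pattern, the snippet below keeps object metadata in a local SQLite table so it can be queried by tag, then returns the exact bucket/key pairs needed for a subsequent S3 GET. The bucket, keys, and tag values are made up for illustration; in a real deployment the table would be populated by your upload pipeline alongside each S3 write.

```python
import sqlite3

# Minimal self-managed search index for S3 object metadata (illustrative data).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE s3_index (bucket TEXT, key TEXT, content_type TEXT, tag TEXT)"
)
rows = [
    ("media-bucket", "photos/2024/cat.jpg", "image/jpeg", "pets"),
    ("media-bucket", "photos/2024/dog.jpg", "image/jpeg", "pets"),
    ("media-bucket", "docs/invoice-001.pdf", "application/pdf", "billing"),
]
conn.executemany("INSERT INTO s3_index VALUES (?, ?, ?, ?)", rows)

def search(tag):
    """Return the exact bucket/key pairs for a tag, ready for an S3 GET."""
    cur = conn.execute("SELECT bucket, key FROM s3_index WHERE tag = ?", (tag,))
    return cur.fetchall()

print(search("pets"))
```

The index answers the "search" question; S3 itself is only ever asked for keys the index already knows.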
NEW QUESTION 3
You are migrating an internal server in your data center to an EC2 instance with an EBS volume. Your server disk usage is around 500 GB, so you copied all your data to a 2 TB disk to be used with AWS Import/Export. Where will the data be imported once it arrives at Amazon?
- to a 2TB EBS volume
- to an S3 bucket with 2 objects of 1TB
- to an 500GB EBS volume
- to an S3 bucket as a 2TB snapshot
Answer: B
Explanation:
An import to Amazon EBS will have different results depending on whether the capacity of your storage device is less than or equal to 1 TB or greater than 1 TB. The maximum size of an Amazon EBS snapshot is 1 TB, so if the device image is larger than 1 TB, the image is chunked and stored on Amazon S3. The target location is determined based on the total capacity of the device, not the amount of data on the device.
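The placement rule above can be sketched as a small decision function. The 1 TB cutoff is the historical EBS snapshot size limit quoted in the explanation; note the decision uses the device's total capacity, not how much data is on it.

```python
# Sketch of the AWS Import/Export target rule described above.
TB = 1024 ** 4

def import_target(device_capacity_bytes):
    """Return where a device image lands when imported toward Amazon EBS."""
    if device_capacity_bytes <= 1 * TB:
        return "EBS volume"
    # Larger device images are chunked and stored as objects in an S3 bucket.
    return "S3 bucket (chunked image)"

# A 2 TB disk holding only 500 GB still goes to S3, because capacity decides.
print(import_target(2 * TB))
print(import_target(500 * 1024 ** 3))
```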
NEW QUESTION 4
A client needs you to import some existing infrastructure from a dedicated hosting provider to AWS to try and save on the cost of running his current website. He also needs an automated process that manages backups, software patching, automatic failure detection, and recovery. You are aware that his existing set up currently uses an Oracle database. Which of the following AWS databases would be best for accomplishing this task?
- Amazon RDS
- Amazon Redshift
- Amazon SimpleDB
- Amazon ElastiCache
Answer: A
Explanation:
Amazon RDS gives you access to the capabilities of a familiar MySQL, Oracle, SQL Server, or PostgreSQL database engine. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery.
NEW QUESTION 5
Do Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance?
- Yes, they do but only if they are detached from the instance.
- No, you cannot attach EBS volumes to an instance.
- No, they are dependent.
- Yes, they do.
Answer: D
Explanation:
An Amazon EBS volume behaves like a raw, unformatted, external block device that you can attach to a
single instance. The volume persists independently from the running life of an Amazon EC2 instance.
NEW QUESTION 6
Does DynamoDB support in-place atomic updates?
- Yes
- No
- It does support in-place non-atomic updates
- It is not defined
Answer: A
Explanation:
DynamoDB supports in-place atomic updates.
NEW QUESTION 7
Your manager has just given you access to multiple VPN connections that someone else has recently set up between all your company's offices. She needs you to make sure that the communication between the VPNs is secure. Which of the following services would be best for providing a low-cost hub-and-spoke model for primary or backup connectivity between these remote offices?
- Amazon CloudFront
- AWS Direct Connect
- AWS CloudHSM
- AWS VPN CloudHub
Answer: D
Explanation:
If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.
NEW QUESTION 8
You are in the process of creating a Route 53 DNS failover to direct traffic between two regions. Obviously, if one fails, you would like Route 53 to direct traffic to the other region. Each region has an ELB with some instances distributed behind it. What is the best way for you to configure the Route 53 health check?
- Route 53 doesn’t support ELB with an internal health check. You need to create your own Route 53 health check of the ELB.
- Route 53 natively supports ELB with an internal health check. Turn “Evaluate target health” off and “Associate with Health Check” on and R53 will use the ELB’s internal health check.
- Route 53 doesn’t support ELB with an internal health check. You need to associate your resource record set for the ELB with your own health check.
- Route 53 natively supports ELB with an internal health check. Turn “Evaluate target health” on and “Associate with Health Check” off and R53 will use the ELB’s internal health check.
Answer: D
Explanation:
With DNS Failover, Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations where your application is operating properly. When you enable this feature, Route 53 uses health checks (regularly making Internet requests to your application's endpoints from multiple locations around the world) to determine whether each endpoint of your application is up or down.
To enable DNS Failover for an ELB endpoint, create an Alias record pointing to the ELB and set the “Evaluate Target Health” parameter to true. Route 53 creates and manages the health checks for your ELB automatically. You do not need to create your own Route 53 health check of the ELB. You also do not need to associate your resource record set for the ELB with your own health check, because Route 53 automatically associates it with the health checks that Route 53 manages on your behalf. The ELB health check will also inherit the health of your backend instances behind that ELB.
NEW QUESTION 9
While using the EC2 GET requests as URLs, the _____ is the URL that serves as the entry point for the web service.
- token
- endpoint
- action
- None of these
Answer: B
Explanation:
The endpoint is the URL that serves as the entry point for the web service.
NEW QUESTION 10
You are checking the workload on some of your General Purpose (SSD) and Provisioned IOPS (SSD) volumes and it seems that the I/O latency is higher than you require. You should probably check the _____ to make sure that your application is not trying to drive more IOPS than you have provisioned.
- Amount of IOPS that are available
- Acknowledgement from the storage subsystem
- Average queue length
- Time it takes for the I/O operation to complete
Answer: C
Explanation:
In EBS, workload demand plays an important role in getting the most out of your General Purpose (SSD) and Provisioned IOPS (SSD) volumes. In order for your volumes to deliver the amount of IOPS that are available, they need to have enough I/O requests sent to them. There is a relationship between the demand on the volumes, the amount of IOPS that are available to them, and the latency of the request (the amount of time it takes for the I/O operation to complete). Latency is the true end-to-end client time of an I/O operation; in other words, when the client sends an I/O request, how long does it take to get an acknowledgement from the storage subsystem that the I/O read or write is complete.
If your I/O latency is higher than you require, check your average queue length to make sure that your application is not trying to drive more IOPS than you have provisioned. You can maintain high IOPS while keeping latency down by maintaining a low average queue length (which is achieved by provisioning more IOPS for your volume).
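The relationship between IOPS, latency, and queue length described above is Little's Law: average queue length = IOPS x latency. A tiny calculation makes the rule of thumb concrete; the numbers below are illustrative.

```python
# Little's Law applied to EBS: queue length = throughput (IOPS) * latency (s).
def avg_queue_length(iops, latency_seconds):
    return iops * latency_seconds

# Driving 1000 IOPS at 1 ms average latency keeps about one request queued,
# which is why a low average queue length goes hand in hand with low latency.
print(avg_queue_length(1000, 0.001))
print(avg_queue_length(4000, 0.002))
```

If the measured queue length is far above this estimate, the application is sending more I/O than the provisioned IOPS can absorb.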
NEW QUESTION 11
Which of the below mentioned options is not available when an instance is launched by Auto Scaling with EC2 Classic?
- Public IP
- Elastic IP
- Private DNS
- Private IP
Answer: B
Explanation:
Auto Scaling supports both EC2 classic and EC2-VPC. When an instance is launched as a part of EC2 classic, it will have the public IP and DNS as well as the private IP and DNS.
NEW QUESTION 12
You have been given a scope to deploy some AWS infrastructure for a large organization. The requirements are that you will have a lot of EC2 instances but may need to add more when the average utilization of your Amazon EC2 fleet is high and conversely remove them when CPU utilization is low. Which AWS services would be best to use to accomplish this?
- Auto Scaling, Amazon CloudWatch and AWS Elastic Beanstalk
- Auto Scaling, Amazon CloudWatch and Elastic Load Balancing.
- Amazon CloudFront, Amazon CloudWatch and Elastic Load Balancing.
- AWS Elastic Beanstalk , Amazon CloudWatch and Elastic Load Balancing
Answer: B
Explanation:
Auto Scaling enables you to follow the demand curve for your applications closely, reducing the need to manually provision Amazon EC2 capacity in advance. For example, you can set a condition to add new Amazon EC2 instances in increments to the Auto Scaling group when the average utilization of your Amazon EC2 fleet is high; and similarly, you can set a condition to remove instances in the same increments when CPU utilization is low. If you have predictable load changes, you can set a schedule through Auto Scaling to plan your scaling activities. You can use Amazon CloudWatch to send alarms to trigger scaling activities and Elastic Load Balancing to help distribute traffic to your instances within Auto Scaling groups. Auto Scaling enables you to run your Amazon EC2 fleet at optimal utilization. Reference: http://aws.amazon.com/autoscaling/
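The scale-out/scale-in condition can be sketched as a small policy function. The 70/30 thresholds and the step size of 2 are illustrative values, not AWS defaults; in practice a CloudWatch alarm on average CPU would trigger the equivalent Auto Scaling policy.

```python
# Hedged sketch of a CPU-based scaling policy: add instances when average
# fleet CPU is high, remove them when it is low (illustrative thresholds).
def scaling_action(avg_cpu_percent, high=70.0, low=30.0, step=2):
    if avg_cpu_percent > high:
        return ("add", step)
    if avg_cpu_percent < low:
        return ("remove", step)
    return ("hold", 0)

print(scaling_action(85.0))   # fleet is busy: scale out
print(scaling_action(20.0))   # fleet is idle: scale in
```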
NEW QUESTION 13
You are building infrastructure for a data warehousing solution and an extra request has come through that there will be a lot of business reporting queries running all the time and you are not sure if your current DB instance will be able to handle it. What would be the best solution for this?
- DB Parameter Groups
- Read Replicas
- Multi-AZ DB Instance deployment
- Database Snapshots
Answer: B
Explanation:
Read Replicas make it easy to take advantage of MySQL’s built-in replication functionality to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. There are a variety of scenarios where deploying one or more Read Replicas for a given source DB Instance may make sense. Common reasons for deploying a Read Replica include:
Scaling beyond the compute or I/O capacity of a single DB Instance for read-heavy database workloads. This excess read traffic can be directed to one or more Read Replicas.
Serving read traffic while the source DB Instance is unavailable. If your source DB Instance cannot take I/O requests (e.g. due to I/O suspension for backups or scheduled maintenance), you can direct read traffic to your Read Replica(s). For this use case, keep in mind that the data on the Read Replica may be “stale” since the source DB Instance is unavailable.
Business reporting or data warehousing scenarios; you may want business reporting queries to run against a Read Replica, rather than your primary, production DB Instance.
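On the application side, the Read Replica pattern usually comes down to routing: writes go to the source DB instance, reads (including reporting queries) fan out over the replicas. The endpoint names below are hypothetical placeholders.

```python
import itertools

# Sketch of read/write routing for the Read Replica pattern.
PRIMARY = "mydb.primary.example.rds.amazonaws.com"       # hypothetical endpoint
REPLICAS = [
    "mydb.replica-1.example.rds.amazonaws.com",          # hypothetical endpoints
    "mydb.replica-2.example.rds.amazonaws.com",
]
_replica_cycle = itertools.cycle(REPLICAS)

def endpoint_for(query):
    """Send SELECTs round-robin to replicas; everything else to the source."""
    if query.lstrip().upper().startswith("SELECT"):
        return next(_replica_cycle)
    return PRIMARY

print(endpoint_for("SELECT * FROM sales"))
print(endpoint_for("INSERT INTO sales VALUES (1)"))
```

Real deployments often hide this behind a connection pooler or the driver, but the routing decision is the same.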
NEW QUESTION 14
Much of your company’s data does not need to be accessed often, and can take several hours for retrieval time, so it’s stored on Amazon Glacier. However, someone within your organization has expressed concerns that his data is more sensitive than the other data, and is wondering whether the high level of encryption that he knows is on S3 is also used on the much cheaper Glacier service. Which of the following statements would be most applicable in regards to this concern?
- There is no encryption on Amazon Glacier, that’s why it is cheaper.
- Amazon Glacier automatically encrypts the data using AES-128 a lesser encryption method than Amazon S3 but you can change it to AES-256 if you are willing to pay more.
- Amazon Glacier automatically encrypts the data using AES-256, the same as Amazon S3.
- Amazon Glacier automatically encrypts the data using AES-128 a lesser encryption method than Amazon S3.
Answer: C
Explanation:
Like Amazon S3, the Amazon Glacier service provides low-cost, secure, and durable storage. But where S3 is designed for rapid retrieval, Glacier is meant to be used as an archival service for data that is not accessed often, and for which retrieval times of several hours are suitable.
Amazon Glacier automatically encrypts the data using AES-256 and stores it durably in an immutable form. Amazon Glacier is designed to provide average annual durability of 99.999999999% for an archive. It stores each archive in multiple facilities and multiple devices. Unlike traditional systems which can require laborious data verification and manual repair, Glacier performs regular, systematic data integrity checks, and is built to be automatically self-healing.
NEW QUESTION 15
In Amazon RDS, security groups are ideally used to:
- Define maintenance period for database engines
- Launch Amazon RDS instances in a subnet
- Create, describe, modify, and delete DB instances
- Control what IP addresses or EC2 instances can connect to your databases on a DB instance
Answer: D
Explanation:
In Amazon RDS, security groups are used to control what IP addresses or EC2 instances can connect to your databases on a DB instance. When you first create a DB instance, its firewall prevents any database access except through rules specified by an associated security group.
NEW QUESTION 16
A user has launched 10 EC2 instances inside a placement group. Which of the below mentioned statements is true with respect to the placement group?
- All instances must be in the same AZ
- All instances can be across multiple regions
- The placement group cannot have more than 5 instances
- All instances must be in the same region
Answer: A
Explanation:
A placement group is a logical grouping of EC2 instances within a single Availability Zone. Using placement groups enables applications to participate in a low latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput or both.
NEW QUESTION 17
You have been doing a lot of testing of your VPC network by deliberately failing EC2 instances to test whether instances are failing over properly. Your customer, who will be paying the AWS bill for all this, asks you if he is being charged for all these instances. You try to explain to him how billing works on EC2 instances to the best of your knowledge. What would be an appropriate response to give to the customer in regards to this?
- Billing commences when the Amazon EC2 AMI instance is completely up and billing ends as soon as the instance starts to shut down.
- Billing only commences after 1 hour of uptime and billing ends when the instance terminates.
- Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance and billing ends when the instance shuts down.
- Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance and billing ends as soon as the instance starts to shut down.
Answer: C
Explanation:
Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance shuts down, which could occur through a web services command, by running “shutdown -h”, or through instance failure.
NEW QUESTION 18
You log in to IAM on your AWS console and notice the following message. “Delete your root access keys.” Why do you think IAM is requesting this?
- Because the root access keys will expire as soon as you log out.
- Because the root access keys expire after 1 week.
- Because the root access keys are the same for all users.
- Because they provide unrestricted access to your AWS resource
Answer: D
Explanation:
In AWS an access key is required in order to sign requests that you make using the command-line interface (CLI), using the AWS SDKs, or using direct API calls. Anyone who has the access key for your root account has unrestricted access to all the resources in your account, including billing information. One of the best ways to protect your account is to not have an access key for your root account. We recommend that unless you must have a root access key (this is very rare), that you do not generate one. Instead, AWS best practice is to create one or more AWS Identity and Access Management (IAM) users, give them the necessary permissions, and use IAM users for everyday interaction with AWS.
NEW QUESTION 19
Once again, your customers are concerned about the security of their sensitive data, and with their latest enquiry they ask about what happens to old storage devices on AWS. What would be the best answer to this question?
- AWS reformats the disks and uses them again.
- AWS uses the techniques detailed in DoD 5220.22-M to destroy data as part of the decommissioning process.
- AWS uses their own proprietary software to destroy data as part of the decommissioning process.
- AWS uses a 3rd party security organization to destroy data as part of the decommissioning process.
Answer: B
Explanation:
When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals.
AWS uses the techniques detailed in DoD 5220.22-M (“National Industrial Security Program Operating Manual”) or NIST 800-88 (“Guidelines for Media Sanitization”) to destroy data as part of the decommissioning process.
All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance
with industry-standard practices.
NEW QUESTION 20
A customer enquires about whether all his data is secure on AWS and is especially concerned about Elastic MapReduce (EMR), so you need to inform him of some of the security features in place for AWS. Which of the below statements would be an incorrect response to your customer's enquiry?
- Amazon EMR customers can choose to send data to Amazon S3 using the HTTPS protocol for secure transmission.
- Amazon S3 provides authentication mechanisms to ensure that stored data is secured against unauthorized access.
- Every packet sent in the AWS network uses Internet Protocol Security (IPsec).
- Customers may encrypt the input data before they upload it to Amazon S3.
Answer: C
Explanation:
Amazon S3 provides authentication mechanisms to ensure that stored data is secured against unauthorized access. Unless the customer who is uploading the data specifies otherwise, only that customer can access the data. Amazon EMR customers can also choose to send data to Amazon S3 using the HTTPS protocol for secure transmission. In addition, Amazon EMR always uses HTTPS to send data between Amazon S3 and Amazon EC2. For added security, customers may encrypt the input data before they upload it to Amazon S3 (using any common data compression tool); they then need to add a decryption step to the beginning of their cluster when Amazon EMR fetches the data from Amazon S3.
NEW QUESTION 21
You are in the process of building an online gaming site for a client and one of the requirements is that it must be able to process vast amounts of data easily. Which AWS Service would be very helpful in processing all this data?
- Amazon S3
- AWS Data Pipeline
- AWS Direct Connect
- Amazon EMR
Answer: D
Explanation:
Managing and analyzing high data volumes produced by online games platforms can be difficult. The back-end infrastructures of online games can be challenging to maintain and operate. Peak usage periods, multiple players, and high volumes of write operations are some of the most common problems that operations teams face.
Amazon Elastic MapReduce (Amazon EMR) is a service that processes vast amounts of data easily. Input data can be retrieved from web server logs stored on Amazon S3 or from player data stored in Amazon DynamoDB tables to run analytics on player behavior, usage patterns, etc. Those results can be stored again on Amazon S3, or inserted in a relational database for further analysis with classic business intelligence tools.
NEW QUESTION 22
You need to change some settings on Amazon Relational Database Service but you do not want the database to reboot immediately which you know might happen depending on the setting that you change. Which of the following will cause an immediate DB instance reboot to occur?
- You change storage type from standard to PIOPS, and Apply Immediately is set to true.
- You change the DB instance class, and Apply Immediately is set to false.
- You change a static parameter in a DB parameter group.
- You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0, and Apply Immediately is set to false.
Answer: A
Explanation:
A DB instance outage can occur when a DB instance is rebooted, when the DB instance is put into a state that prevents access to it, and when the database is restarted. A reboot can occur when you manually reboot your DB instance or when you change a DB instance setting that requires a reboot before it can take effect.
A DB instance reboot occurs immediately when one of the following occurs:
- You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0, and Apply Immediately is set to true.
- You change the DB instance class, and Apply Immediately is set to true.
- You change the storage type from standard to PIOPS, and Apply Immediately is set to true.
A DB instance reboot occurs during the maintenance window when one of the following occurs:
- You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0, and Apply Immediately is set to false.
- You change the DB instance class, and Apply Immediately is set to false.
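The rules above reduce to a small lookup: for the three reboot-requiring changes, Apply Immediately decides whether the reboot happens now or in the maintenance window. The change names below are made-up labels for the three cases in the explanation.

```python
# Sketch of the RDS reboot-timing rules from the explanation above.
REBOOT_REQUIRED = {
    "backup_retention_zero_crossing",  # retention 0 -> nonzero or back
    "instance_class",                  # DB instance class change
    "storage_type_to_piops",           # standard -> PIOPS storage
}

def reboot_timing(change, apply_immediately):
    """Return when the DB instance reboot happens for a given change."""
    if change not in REBOOT_REQUIRED:
        return "none"
    return "immediate" if apply_immediately else "maintenance window"

print(reboot_timing("storage_type_to_piops", True))   # answer A's scenario
print(reboot_timing("instance_class", False))
```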
NEW QUESTION 23
Having set up a website to automatically be redirected to a backup website if it fails, you realize that there are different types of failovers that are possible. You need all your resources to be available the majority of the time. Using Amazon Route 53 which configuration would best suit this requirement?
- Active-active failover.
- None. Route 53 can’t failover.
- Active-passive failover.
- Active-active-passive and other mixed configurations.
Answer: A
Explanation:
You can set up a variety of failover configurations using Amazon Route 53 alias, weighted, latency, geolocation routing, and failover resource record sets. Active-active failover: Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes unavailable, Amazon Route 53 can detect that it’s unhealthy and stop including it when responding to queries.
Active-passive failover: Use this failover configuration when you want a primary group of resources to be available the majority of the time and you want a secondary group of resources to be on standby in case all of the primary resources become unavailable. When responding to queries, Amazon Route 53 includes only the healthy primary resources. If all of the primary resources are unhealthy, Amazon Route 53 begins to include only the healthy secondary resources in response to DNS queries.
Active-active-passive and other mixed configurations: You can combine alias and non-alias resource record sets to produce a variety of Amazon Route 53 behaviors.
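The active-active behavior can be sketched as a selection rule: answer DNS queries with every healthy record and drop the unhealthy ones. The IPs and health values below are hard-coded stand-ins for real health-check results.

```python
# Sketch of active-active DNS failover: all healthy records are returned.
endpoints = {
    "203.0.113.10": True,    # healthy
    "203.0.113.11": False,   # failed its health check
    "203.0.113.12": True,    # healthy
}

def dns_answer(records):
    """Return all healthy records; if none are healthy, return everything."""
    healthy = [ip for ip, ok in records.items() if ok]
    # Returning all records when none are healthy mirrors DNS "fail open"
    # behavior: an answer of some kind is still required.
    return healthy or list(records)

print(dns_answer(endpoints))
```

Active-passive differs only in the selection rule: return the primary group while any of it is healthy, and fall back to the secondary group otherwise.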
NEW QUESTION 24
You have been storing massive amounts of data on Amazon Glacier for the past 2 years and now start to wonder if there are any limitations on this. What is the correct answer to your question?
- The total volume of data is limited but the number of archives you can store are unlimited.
- The total volume of data is unlimited but the number of archives you can store are limited.
- The total volume of data and number of archives you can store are unlimited.
- The total volume of data is limited and the number of archives you can store are limited.
Answer: C
Explanation:
An archive is a durably stored block of information. You store your data in Amazon Glacier as archives. You may upload a single file as an archive, but your costs will be lower if you aggregate your data. TAR and ZIP are common formats that customers use to aggregate multiple files into a single file before uploading to Amazon Glacier.
The total volume of data and number of archives you can store are unlimited. Individual Amazon Glacier archives can range in size from 1 byte to 40 terabytes. The largest archive that can be uploaded in a single upload request is 4 gigabytes.
For items larger than 100 megabytes, customers should consider using the Multipart upload capability. Archives stored in Amazon Glacier are immutable, i.e. archives can be uploaded and deleted but cannot be edited or overwritten.
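The size limits above can be summarized in a small helper that picks an upload strategy for a given archive size. The thresholds are the ones quoted in the explanation; the return strings are illustrative labels.

```python
# Sketch of Glacier's archive size rules: 40 TB max archive, 4 GB max
# single upload request, multipart suggested above 100 MB.
GB = 1024 ** 3
TB = 1024 ** 4

def upload_strategy(archive_bytes):
    if archive_bytes > 40 * TB:
        return "too large for a single archive"
    if archive_bytes > 4 * GB:
        return "multipart upload required"
    if archive_bytes > 100 * 1024 ** 2:
        return "multipart upload recommended"
    return "single upload"

print(upload_strategy(200 * 1024 ** 2))   # a 200 MB item
print(upload_strategy(10 * GB))           # exceeds the 4 GB single-request cap
```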
NEW QUESTION 25
You are setting up your first Amazon Virtual Private Cloud (Amazon VPC) so you decide to use the VPC wizard in the AWS console to help make it easier for you. Which of the following statements is correct regarding instances that you launch into a default subnet via the VPC wizard?
- Instances that you launch into a default subnet receive a public IP address and 10 private IP addresses.
- Instances that you launch into a default subnet receive both a public IP address and a private IP address.
- Instances that you launch into a default subnet don’t receive any ip addresses and you need to define them manually.
- Instances that you launch into a default subnet receive a public IP address and 5 private IP addresses
Answer: B
Explanation:
Instances that you launch into a default subnet receive both a public IP address and a private IP address. Instances in a default subnet also receive both public and private DNS hostnames. Instances that you launch into a nondefault subnet in a default VPC don’t receive a public IP address or a DNS hostname. You can change your subnet’s default public IP addressing behavior.
NEW QUESTION 26
An existing client comes to you and says that he has heard that launching instances into a VPC (virtual private cloud) is a better strategy than launching instances into EC2-Classic, which he knows is what you currently do. You suspect that he is correct and he has asked you to do some research about this and get back to him. Which of the following statements is true in regards to what ability launching your instances into a VPC instead of EC2-Classic gives you?
- All of the things listed here.
- Change security group membership for your instances while they’re running
- Assign static private IP addresses to your instances that persist across starts and stops
- Define network interfaces, and attach one or more network interfaces to your instances
Answer: A
Explanation:
By launching your instances into a VPC instead of EC2-Classic, you gain the ability to:
- Assign static private IP addresses to your instances that persist across starts and stops
- Assign multiple IP addresses to your instances
- Define network interfaces, and attach one or more network interfaces to your instances
- Change security group membership for your instances while they’re running
- Control the outbound traffic from your instances (egress filtering) in addition to controlling the inbound traffic to them (ingress filtering)
- Add an additional layer of access control to your instances in the form of network access control lists (ACL)
- Run your instances on single-tenant hardware
NEW QUESTION 27
Amazon S3 allows you to set per-file permissions to grant read and/or write access. However, you have decided that you want an entire bucket with 100 files already in it to be accessible to the public. You don’t want to go through 100 files individually and set permissions. What would be the best way to do this?
- Move the bucket to a new region
- Add a bucket policy to the bucket.
- Move the files to a new bucket.
- Use Amazon EBS instead of S3
Answer: B
Explanation:
Amazon S3 supports several mechanisms that give you flexibility to control who can access your data as well as how, when, and where they can access it. Amazon S3 provides four different access control mechanisms: AWS Identity and Access Management (IAM) policies, Access Control Lists (ACLs), bucket policies, and query string authentication. IAM enables organizations to create and manage multiple users under a single AWS account. With IAM policies, you can grant IAM users fine-grained control to your Amazon S3 bucket or objects. You can use ACLs to selectively add (grant) certain permissions on individual objects. Amazon S3 bucket policies can be used to add or deny permissions across some or all of the objects within a single bucket. With query string authentication, you have the ability to share Amazon S3 objects through URLs that are valid for a specified period of time.
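As a sketch, the bucket policy for the scenario above is a single JSON statement that grants anonymous `s3:GetObject` on every object in the bucket, so none of the 100 per-file ACLs need to be touched. The bucket name is a placeholder; the generated JSON would be attached via the S3 console or an API call.

```python
import json

# Build a public-read bucket policy (bucket name is hypothetical).
def public_read_policy(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",                           # anonymous access
            "Action": "s3:GetObject",                   # read objects only
            "Resource": f"arn:aws:s3:::{bucket}/*",     # every key in the bucket
        }],
    }

print(json.dumps(public_read_policy("example-bucket"), indent=2))
```

Because the policy applies at the bucket level, files added later are covered automatically, which per-object ACLs cannot do.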
NEW QUESTION 28
A user is accessing an EC2 instance on the SSH port for IP 10.20.30.40. Which one is a secure way to configure that the instance can be accessed only from this IP?
- In the security group, open port 22 for IP 10.20.30.40
- In the security group, open port 22 for IP 10.20.30.40/32
- In the security group, open port 22 for IP 10.20.30.40/24
- In the security group, open port 22 for IP 10.20.30.40/0
Answer: B
Explanation:
In AWS EC2, while configuring a security group, the user needs to specify the IP address in CIDR notation. The CIDR range 10.20.30.40/32 covers only the single IP 10.20.30.40. If the user specifies the IP as 10.20.30.40 only, the security group will not accept it and will ask for it in CIDR format.
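Python's standard `ipaddress` module makes the difference between the answer choices concrete: a /32 network contains exactly one address, while a /24 contains 256.

```python
import ipaddress

# /32 matches exactly one host; /24 matches a whole 256-address subnet.
single = ipaddress.ip_network("10.20.30.40/32")
subnet = ipaddress.ip_network("10.20.30.0/24")  # /24 needs host bits of zero

assert single.num_addresses == 1
assert subnet.num_addresses == 256
assert ipaddress.ip_address("10.20.30.40") in single
assert ipaddress.ip_address("10.20.30.41") not in single  # neighbors excluded

print("10.20.30.40/32 matches only 10.20.30.40")
```

So option B is the secure choice: the rule admits the one intended address and nothing else.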
NEW QUESTION 29
An accountant asks you to design a small VPC network for him and, due to the nature of his business, just needs something where the workload on the network will be low, and dynamic data will be accessed infrequently. Being an accountant, low cost is also a major factor. Which EBS volume type would best suit his requirements?
- Magnetic
- Any, as they all perform the same and cost the same.
- General Purpose (SSD)
- Magnetic or Provisioned IOPS (SSD)
Answer: A
Explanation:
You can choose between three EBS volume types to best meet the needs of your workloads: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. General Purpose (SSD) is the SSD-backed, general purpose EBS volume type that AWS recommends as the default choice for customers. General Purpose (SSD) volumes are suitable for a broad range of workloads, including small to medium sized databases, development and test environments, and boot volumes. Provisioned IOPS (SSD) volumes offer storage with consistent and low-latency performance, and are designed for I/O intensive applications such as large relational or NoSQL databases. Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types. Magnetic volumes are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important.
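The selection logic above can be sketched as a small decision helper. The function name, parameters, and return strings are illustrative only, not an AWS API; the point is simply that the accountant’s low-traffic, infrequently accessed, cost-sensitive workload maps to Magnetic.

```python
def pick_ebs_volume_type(io_intensive: bool,
                         infrequent_access: bool,
                         cost_sensitive: bool) -> str:
    """Illustrative mapping of workload traits to the three classic
    EBS volume types discussed above (not an AWS API)."""
    if io_intensive:
        # Large relational/NoSQL databases needing consistent low latency.
        return "Provisioned IOPS (SSD)"
    if infrequent_access and cost_sensitive:
        # Lowest cost per GB; fine when data is rarely accessed.
        return "Magnetic"
    # Sensible default: boot volumes, dev/test, small-to-medium databases.
    return "General Purpose (SSD)"

# The accountant's workload: low traffic, infrequent access, low cost.
print(pick_ebs_volume_type(io_intensive=False,
                           infrequent_access=True,
                           cost_sensitive=True))  # Magnetic
```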
NEW QUESTION 30
A user is planning to launch a scalable web application. Which of the below mentioned options will not affect the latency of the application?
- Region.
- Provisioned IOPS.
- Availability Zone.
- Instance size
Answer: C
Explanation:
In AWS, the instance size determines the I/O characteristics. Provisioned IOPS ensures higher throughput and lower latency. The region does affect latency: latency is lower when the instance is near the end user. Within a region, the user can use any AZ and this does not affect latency; the AZ is mainly for fault tolerance and high availability.
NEW QUESTION 31
In Amazon EC2, if your EBS volume stays in the detaching state, you can force the detachment by clicking ____.
- Force Detach
- Detach Instance
- AttachVolume
- AttachInstance
Answer: A
Explanation:
If your volume stays in the detaching state, you can force the detachment by clicking Force Detach.
NEW QUESTION 32
Which IAM role do you use to grant AWS Lambda permission to access a DynamoDB Stream?
- Dynamic role
- Invocation role
- Execution role
- Event Source role
Answer: C
Explanation:
You grant AWS Lambda permission to access a DynamoDB Stream using an IAM role known as the “execution role”.
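An execution role combines two JSON documents: a trust policy that lets the Lambda service assume the role, and a permissions policy granting access to the stream. The sketch below shows both as plain dictionaries; the account ID and table name in the stream ARN are placeholders, not real resources.

```python
import json

# Trust policy: allows the Lambda service to assume the execution role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: grants the role read access to a DynamoDB Stream.
# The account ID and table name below are placeholders for illustration.
stream_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "dynamodb:GetRecords",
            "dynamodb:GetShardIterator",
            "dynamodb:DescribeStream",
            "dynamodb:ListStreams",
        ],
        "Resource": "arn:aws:dynamodb:*:111122223333:table/ExampleTable/stream/*",
    }],
}

print(json.dumps(trust_policy, indent=2))
print(json.dumps(stream_policy, indent=2))
```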
NEW QUESTION 33
You are signed in as root user on your account but there is an Amazon S3 bucket under your account that you cannot access. What is a possible reason for this?
- An IAM user assigned a bucket policy to an Amazon S3 bucket and didn’t specify the root user as a principal
- The S3 bucket is full.
- The S3 bucket has reached the maximum number of objects allowed.
- You are in the wrong availability zone
Answer: A
Explanation:
With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users can access. In some cases, you might have an IAM user with full access to IAM and Amazon S3. If the IAM user assigns a bucket policy to an Amazon S3 bucket and doesn’t specify the root user as a principal, the root user is denied access to that bucket. However, as the root user, you can still access the bucket by modifying the bucket policy to allow root user access.
NEW QUESTION 34
Select the correct statement: Within Amazon EC2, when using Linux instances, the device name /dev/sda1 is ____.
- reserved for EBS volumes
- recommended for EBS volumes
- recommended for instance store volumes
- reserved for the root device
Answer: D
Explanation:
Within Amazon EC2, when using a Linux instance, the device name /dev/sda1 is reserved for the root device.
NEW QUESTION 35
A user is planning to make a mobile game which can be played online or offline and will be hosted on EC2.
The user wants to ensure that when someone breaks the highest score or achieves some milestone, they can inform all their colleagues through email. Which of the below mentioned AWS services helps achieve this goal?
- AWS Simple Workflow Service.
- AWS Simple Email Service.
- Amazon Cognito
- AWS Simple Queue Service
Answer: B
Explanation:
Amazon Simple Email Service (Amazon SES) is a highly scalable and cost-effective email-sending service for businesses and developers. It integrates with other AWS services, making it easy to send emails from applications that are hosted on AWS.
NEW QUESTION 36
You receive the following request from a client to quickly deploy a static website for them, specifically on AWS. The requirements are low-cost, reliable, online storage, and a reliable and cost-effective way to route customers to the website, as well as a way to deliver content with low latency and high data transfer speeds so that visitors to his website don’t experience unnecessary delays. What do you think would be the minimum AWS services that could fulfill the client’s request?
- Amazon Route 53, Amazon CloudFront and Amazon VPC.
- Amazon S3, Amazon Route 53 and Amazon RDS
- Amazon S3, Amazon Route 53 and Amazon CloudFront
- Amazon S3 and Amazon Route 53.
Answer: C
Explanation:
You can easily and inexpensively use AWS to host a website that uses client-side technologies (such as HTML, CSS, and JavaScript) and does not require server-side technologies (such as PHP and ASP.NET). This type of site is called a static website, and is used to display content that does not change frequently. Before you create and deploy a static website, you must plan your architecture to ensure that it meets your requirements. Amazon S3, Amazon Route 53, and Amazon CloudFront would be required in this instance.
NEW QUESTION 37
How long does the AWS free usage tier for EC2 last?
- Forever
- 12 Months upon signup
- 1 Month upon signup
- 6 Months upon signup
Answer: B
Explanation:
The AWS free usage tier will expire 12 months from the date you sign up. When your free usage expires or if your application use exceeds the free usage tiers, you simply pay the standard, pay-as-you-go service rates.
NEW QUESTION 38
Which of the following statements is true of tagging an Amazon EC2 resource?
- You don’t need to specify the resource identifier while terminating a resource.
- You can terminate, stop, or delete a resource based solely on its tags.
- You can’t terminate, stop, or delete a resource based solely on its tags.
- You don’t need to specify the resource identifier while stopping a resource.
Answer: C
Explanation:
You can assign tags only to resources that already exist. You can’t terminate, stop, or delete a resource based solely on its tags; you must specify the resource identifier.
NEW QUESTION 39
A user has created a CloudFormation stack. The stack creates AWS services, such as EC2 instances, ELB, Auto Scaling, and RDS. While creating the stack it created EC2, ELB and Auto Scaling but failed to create RDS. What will CloudFormation do in this scenario?
- Rollback all the changes and terminate all the created services
- It will wait for the user’s input about the error and correct the mistake after the input
- CloudFormation can never throw an error after launching a few services since it verifies all the steps before launching
- It will warn the user about the error and ask the user to manually create RDS
Answer: A
Explanation:
AWS CloudFormation is an application management tool which provides application modeling, deployment, configuration, management and related activities. The AWS CloudFormation stack is a collection of AWS resources which are created and managed as a single unit when AWS CloudFormation instantiates a template. If any of the services fails to launch, CloudFormation will roll back all the changes and terminate or delete all the created services.
NEW QUESTION 40
A major client who has been spending a lot of money on his internet service provider asks you to set up an AWS Direct Connect connection to try and save him some money. You know he needs high-speed connectivity. Which connection port speeds are available on AWS Direct Connect?
- 500Mbps and 1Gbps
- 1Gbps and 10Gbps
- 100Mbps and 1Gbps
- 1Gbps
Answer: B
Explanation:
AWS Direct Connect is a network service that provides an alternative to using the internet to utilize AWS cloud services.
Using AWS Direct Connect, data that would have previously been transported over the Internet can now be delivered through a private network connection between AWS and your datacenter or corporate network.
1Gbps and 10Gbps ports are available. Speeds of 50Mbps, 100Mbps, 200Mbps, 300Mbps, 400Mbps, and 500Mbps can be ordered from APN partners supporting AWS Direct Connect.
NEW QUESTION 41
Is it possible to get a history of all EC2 API calls made on your account for security analysis and operational troubleshooting purposes?
- Yes, by default, the history of your API calls is logged.
- Yes, you should turn on CloudTrail in the AWS console.
- No, you can only get a history of VPC API calls.
- No, you cannot store history of EC2 API calls on Amazon.
Answer: B
Explanation:
To get a history of all EC2 API calls (including VPC and EBS) made on your account, you simply turn on CloudTrail in the AWS Management Console.
NEW QUESTION 42
You have just set up your first Elastic Load Balancer (ELB) but it does not seem to be configured properly. You discover that before you start using ELB, you have to configure the listeners for your load balancer. Which protocols does ELB use to support the load balancing of applications?
- HTTP and HTTPS
- HTTP, HTTPS, TCP, SSL and SSH
- HTTP, HTTPS, TCP, and SSL
- HTTP, HTTPS, TCP, SSL and SFTP
Answer: C
Explanation:
Before you start using Elastic Load Balancing (ELB), you have to configure the listeners for your load balancer. A listener is a process that listens for connection requests. It is configured with a protocol and a port number for front-end (client to load balancer) and back-end (load balancer to back-end instance) connections. Elastic Load Balancing supports the load balancing of applications using HTTP, HTTPS (secure HTTP), TCP, and SSL (secure TCP) protocols. HTTPS uses the SSL protocol to establish secure connections over the HTTP layer. You can also use the SSL protocol to establish secure connections over the TCP layer. The acceptable ports for both HTTPS/SSL and HTTP/TCP connections are 25, 80, 443, 465, 587, and 1024-65535.
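The listener rules above can be expressed as a small validity check. This is an illustrative sketch: the function and its data structures are assumptions for this example, not an AWS API shape.

```python
# Protocols and front-end ports ELB listeners accept, per the text above.
SUPPORTED_PROTOCOLS = {"HTTP", "HTTPS", "TCP", "SSL"}
ACCEPTABLE_PORTS = {25, 80, 443, 465, 587} | set(range(1024, 65536))

def listener_is_valid(protocol: str, lb_port: int, instance_port: int) -> bool:
    """True if the listener's protocol and front-end port would be accepted.

    The back-end (instance) port is only checked to be a valid port number.
    """
    return (protocol.upper() in SUPPORTED_PROTOCOLS
            and lb_port in ACCEPTABLE_PORTS
            and 1 <= instance_port <= 65535)

print(listener_is_valid("HTTPS", 443, 80))  # True
print(listener_is_valid("SSH", 22, 22))     # False: SSH unsupported, port 22 not allowed
```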
NEW QUESTION 43
What happens to Amazon EBS root device volumes, by default, when an instance terminates?
- Amazon EBS root device volumes are moved to IAM.
- Amazon EBS root device volumes are copied into Amazon RDS.
- Amazon EBS root device volumes are automatically deleted.
- Amazon EBS root device volumes remain in the database until you delete them.
Answer: C
Explanation:
By default, Amazon EBS root device volumes are automatically deleted when the instance terminates.
NEW QUESTION 44
Having just set up your first Amazon Virtual Private Cloud (Amazon VPC) network, which defined a default network interface, you decide that you need to create and attach an additional network interface, known as an elastic network interface (ENI) to one of your instances. Which of the following statements is true regarding attaching network interfaces to your instances in your VPC?
- You can attach 5 ENIs per instance type.
- You can attach as many ENIs as you want.
- The number of ENIs you can attach varies by instance type.
- You can attach 100 ENIs total regardless of instance type.
Answer: C
Explanation:
Each instance in your VPC has a default network interface that is assigned a private IP address from the IP address range of your VPC. You can create and attach an additional network interface, known as an elastic network interface (ENI), to any instance in your VPC. The number of ENIs you can attach varies by instance type.
NEW QUESTION 45
A ____ for a VPC is a collection of subnets (typically private) that you may want to designate for your backend RDS DB Instances.
- DB Subnet Set
- RDS Subnet Group
- DB Subnet Group
- DB Subnet Collection
Answer: C
Explanation:
DB Subnet Groups are a set of subnets (one per Availability Zone of a particular region) designed for your DB instances that reside in a VPC. They make it easy to manage Multi-AZ deployments as well as the conversion from a Single-AZ to a Multi-AZ one.
NEW QUESTION 46
Amazon Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all Availability Zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits.
Which of the following is not an advantage of ELB over an on-premises load balancer?
- ELB uses a four-tier, key-based architecture for encryption.
- ELB offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network.
- ELB takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer.
- ELB supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections.
Answer: A
Explanation:
Amazon Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all Availability Zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits:
- Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer
- Offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network
- When used in an Amazon VPC, supports creation and management of security groups associated with your Elastic Load Balancing to provide additional networking and security options
- Supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections. When TLS is used, the TLS server certificate used to terminate client connections can be managed centrally on the load balancer, rather than on every individual instance.
NEW QUESTION 47
You are setting up your first Amazon Virtual Private Cloud (Amazon VPC) network so you decide you should probably use the AWS Management Console and the VPC Wizard. Which of the following is not an option for network architectures after launching the “Start VPC Wizard” in Amazon VPC page on the AWS Management Console?
- VPC with a Single Public Subnet Only
- VPC with a Public Subnet Only and Hardware VPN Access
- VPC with Public and Private Subnets and Hardware VPN Access
- VPC with a Private Subnet Only and Hardware VPN Access
Answer: B
Explanation:
Amazon VPC enables you to build a virtual network in the AWS cloud – no VPNs, hardware, or physical datacenters required. Your AWS resources are automatically provisioned in a ready-to-use default VPC. You can choose to create additional VPCs by going to Amazon VPC page on the AWS Management Console and click on the “Start VPC Wizard” button.
You’ll be presented with four basic options for network architectures. After selecting an option, you can modify the size and IP address range of the VPC and its subnets. If you select an option with Hardware VPN Access, you will need to specify the IP address of the VPN hardware on your network. You can modify the VPC to add more subnets or add or remove gateways at any time after the VPC has been created.
The four options are:
- VPC with a Single Public Subnet Only
- VPC with Public and Private Subnets
- VPC with Public and Private Subnets and Hardware VPN Access
- VPC with a Private Subnet Only and Hardware VPN Access
NEW QUESTION 48
A user is trying to launch a similar EC2 instance from an existing instance with the option “Launch More like this”. The AMI of the selected instance is deleted. What will happen in this case?
- AWS does not need an AMI for the “Launch more like this” option
- AWS will launch the instance but will not create a new AMI
- AWS will create a new AMI and launch the instance
- AWS will throw an error saying that the AMI is deregistered
Answer: D
Explanation:
If the user has deregistered the AMI of an EC2 instance and is trying to launch a similar instance with the option “Launch more like this”, AWS will throw an error saying that the AMI is deregistered or not available.
NEW QUESTION 49
Your company has multiple IT departments, each with their own VPC. Some VPCs are located within the same AWS account, and others in a different AWS account. You want to peer together all VPCs to enable the IT departments to have full access to each other’s resources. There are certain limitations placed on VPC peering. Which of the following statements is incorrect in relation to VPC peering?
- Private DNS values cannot be resolved between instances in peered VPCs.
- You can have up to 3 VPC peering connections between the same two VPCs at the same time.
- You cannot create a VPC peering connection between VPCs in different regions.
- You have a limit on the number of active and pending VPC peering connections that you can have per VPC.
Answer: B
Explanation:
To create a VPC peering connection with another VPC, you need to be aware of the following limitations and rules:
- You cannot create a VPC peering connection between VPCs that have matching or overlapping CIDR blocks.
- You cannot create a VPC peering connection between VPCs in different regions.
- You have a limit on the number of active and pending VPC peering connections that you can have per VPC.
- VPC peering does not support transitive peering relationships; in a VPC peering connection, your VPC will not have access to any other VPCs that the peer VPC may be peered with. This includes VPC peering connections that are established entirely within your own AWS account.
- You cannot have more than one VPC peering connection between the same two VPCs at the same time.
- The Maximum Transmission Unit (MTU) across a VPC peering connection is 1500 bytes.
- A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs.
- Unicast reverse path forwarding in VPC peering connections is not supported.
- You cannot reference a security group from the peer VPC as a source or destination for ingress or egress rules in your security group. Instead, reference CIDR blocks of the peer VPC as the source or destination of your security group’s ingress or egress rules.
- Private DNS values cannot be resolved between instances in peered VPCs.
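The matching-or-overlapping-CIDR rule can be checked ahead of time with Python’s standard `ipaddress` module. The helper below is an illustrative sketch (not an AWS API); the CIDR blocks are example ranges.

```python
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """A peering connection is ruled out when the two VPC CIDR blocks
    match or overlap, per the limitation noted above."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))    # True: disjoint ranges
print(can_peer("10.0.0.0/16", "10.0.128.0/17"))  # False: second range sits inside the first
```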
NEW QUESTION 50
In the most recent company meeting, your CEO focused on the fact that everyone in the organization needs to make sure that all of the infrastructure that is built is truly scalable. Which of the following statements is incorrect in reference to scalable architecture?
- A scalable service is capable of handling heterogeneity.
- A scalable service is resilient.
- A scalable architecture won’t be cost effective as it grows.
- Increasing resources results in a proportional increase in performance.
Answer: C
Explanation:
In AWS it is critical to build a scalable architecture in order to take advantage of a scalable infrastructure. The cloud is designed to provide conceptually infinite scalability. However, you cannot leverage all that scalability in infrastructure if your architecture is not scalable. Both have to work together. You will have to identify the monolithic components and bottlenecks in your architecture, identify the areas where you cannot leverage the on-demand provisioning capabilities in your architecture, and work to refactor your application in order to leverage the scalable infrastructure and take advantage of the cloud. Characteristics of a truly scalable application:
- Increasing resources results in a proportional increase in performance
- A scalable service is capable of handling heterogeneity
- A scalable service is operationally efficient
- A scalable service is resilient
- A scalable service should become more cost effective when it grows (cost per unit reduces as the number of units increases)
Author:-
Gandhar Bodas
© Copyright 2021 | SevenMentor Pvt Ltd.