Archive for August, 2017:

1Z0-404 Oracle Communications Session Border Controller 7 Basic Implementation Essentials

Exam Details
Duration: 120 minutes
Number of Questions: 70
Passing Score: 70%

Exam has been validated against Oracle Session Border Controller 7.

Format: Multiple Choice

Take Recommended Training Courses
Complete one of the courses below to prepare for your exam (optional):
Oracle SBC Configuration and Administration
Oracle SBC Troubleshooting

Session Initiation Protocol (SIP) Essentials
Describe SIP and architecture elements: SIP proxies and back-to-back user agents (B2BUAs)
Diagnose and troubleshoot a basic SIP call flow processed by Oracle Session Border Controller

Introduction to Session Border Controller (SBC)
Describe the basic functions of a Session Border Controller
Describe the boot process and the SBC services

Initial Configuration
Explain the boot parameters and their effects
Describe the configuration concepts and configuration tree
Execute user and super-user level commands in the ACLI
Analyze, create, modify, and delete configuration elements
Perform routine operations including boot-related operations

Provisioning Interfaces
Describe the network interface’s default behavior and how it is altered
Provision physical interfaces
Provision network interfaces (VLAN and non-VLAN)
Enable/disable management operations through a media interface

Session Border Controller Concepts
Explain realms and realm bridging
Configure global SIP parameters and Media Manager
Configure realms, SIP interfaces, and steering pools
Configure routing policies, session agents, and header manipulation rules

Peering Environment Configuration
Describe the Policy-Based Realm Bridging (PBRB) configuration tasks in Peering environments
Configure a working Peering environment
Configure Peering access rules

Access-Backbone Environment Configuration
Explain registration caching, Hosted NAT Traversal (HNT), and Adaptive HNT
Configure the PBRB model in an Access-Backbone environment

Configuring SBC High Availability
Explain the operation of the high-availability mechanism and SBC node states
Configure a high-availability SBC pair
Manage a high-availability SBC pair system failover


QUESTION 1
The Session Border Controller ACLI is structured in a way that separates configuration of layers 3, 4, and 5. This allows the system administrator to link each configuration together as needed for signaling and media routing purposes.
Which two options are valid to link signaling and media interfaces to a realm? (Choose two.)

A. Navigate to the iwf-config configuration element and set the media-interface-id parameter to the realm name.
B. Navigate to the account-config configuration element and set the realm-id parameter to the realm name.
C. Navigate to the network-interface configuration element and set the sip-interface-id parameter to the realm name.
D. Navigate to the steering-pool configuration element and set the realm-id parameter to the realm name.
E. Navigate to the sip-interface configuration element and set the realm-id parameter to the realm name.

Answer: D,E
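As a sketch of how signaling and media interfaces are tied to a realm in the ACLI, the configuration typically looks like the following (element paths follow the standard configuration tree; the realm name, IP address, and port range are illustrative):

```
# Signaling: the sip-interface's realm-id binds SIP signaling to the realm
session-router
  sip-interface
    realm-id core
    done

# Media: the steering-pool's realm-id binds a range of media ports to the realm
media-manager
  steering-pool
    ip-address 192.168.1.10
    start-port 20000
    end-port 29999
    realm-id core
    done
```

In both cases the realm-id parameter is set to the name of an already-defined realm-config element.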


QUESTION 2
You are configuring access rules in a Session Border Controller Peering architecture.
Which two configuration steps are required to allow access only from User Agents (UAs) configured as session agents? (Choose two.)

A. Navigate to the sip-port configuration element and set the allow-anonymous parameter to all.
B. Navigate to the session-agent configuration element and set the ip-address parameter to the IP subnet of your trusted User Agent (UA).
C. Navigate to the realm-config configuration element and set the addr-prefix parameter to the IP subnet of your trusted User Agent (UA).
D. Navigate to the sip-port configuration element and set the allow-anonymous parameter to agents-only.
E. Navigate to the sip-port configuration element and set the allow-anonymous parameter to address-prefix.

Answer: B,D
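A hedged ACLI sketch of restricting access to configured session agents (the hostname and IP address are illustrative):

```
# Define the trusted UA as a session agent
session-router
  session-agent
    hostname peer1.example.com
    ip-address 203.0.113.10
    done

# On the SIP port, accept signaling from configured session agents only
  sip-interface
    sip-port
      allow-anonymous agents-only
      done
```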


QUESTION 3
You need to configure the Session Border Controller to perform load balancing between two downstream SIP proxies.
Which option shows the configuration elements that you should configure for the load balancing feature?

A. session-router and router-group
B. realm-config and enum-group
C. group-policy and load-policy
D. proxy and proxy-pool
E. local-policy and session-group

Answer: E


QUESTION 4
You were working with the Session Border Controller in configuration mode and you forgot to close your session.
Your colleague who connected after you left says that he cannot enter into configuration mode.
Why is your colleague not able to configure the Session Border Controller?

A. The Session Border Controller does not allow more than one configuration session within 24 hours.
B. The Session Border Controller does not support Telnet/SSH timeouts.
C. The Session Border Controller supports only one simultaneous configuration session, and the Telnet/SSH timeouts are set to 0.
D. The Session Border Controller supports only 10 simultaneous configuration sessions.
E. The Session Border Controller supports only five simultaneous Telnet/SSH sessions.

Answer: C
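The idle timeouts referenced in the correct answer are settable under the system-config element. A hedged sketch (parameter names per the standard system-config element; a value of 0 disables the timeout, which is what leaves an abandoned configuration session holding the lock):

```
system-config
  telnet-timeout 300    # drop idle Telnet/SSH sessions after 300 seconds (0 = never)
  console-timeout 300   # same for the console session
  done
```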

Click here to view complete Q&A of 1Z0-404 exam
Certkingdom Review, Certkingdom PDF

MCTS Training, MCITP Training

Best Oracle 1Z0-404 Certification, Oracle 1Z0-404 Training at certkingdom.com


Continue Reading

C2150-602 IBM Security Intelligence Solution Advisor V1

Test information:
Number of questions: 48
Time allowed in minutes: 90
Required passing score: 60%
Languages: English

Related certifications:
IBM Certified Solution Advisor – Security Intelligence V1

The test consists of five sections containing a total of approximately 48 multiple-choice questions. The percentages after each section title reflect the approximate distribution of the total question set across the sections.

Section 1 – Discover (35%)
Interpret the customer's requirements.
Estimate the customer's environment.
Identify and map requirements into product capabilities.
Deliver presentations.
Interpret RFP/RFQ to address functionality and components.
Explain the benefits of partnering with IBM Security.
Identify the business driver for security intelligence.

Section 2 – Scope (15%)
Discuss sizing and licensing considerations such as hardware requirements, number of regions/data centers, and network impacts.
Demonstrate how the integration can evolve with adoption of new components.
Identify the requirement for customization and deployment.

Section 3 – Plan (21%)
Construct a deployment plan (i.e., appliances needed, placement and licensing of the equipment).
Define general requirements for performance, capacity, security, reporting, availability, and regulations.
Define the feasibility requirements in terms of integration.
Prepare a conceptual view of the architecture.

Section 4 – Design (10%)
Finalize a customer's use case.
Finalize a customer’s environment.

Section 5 – Consult (19%)
Outline the Bill of Material.
Deliver customized presentation solutions.
Explain detailed drawings.
Discuss design options, focusing on requirements, product capabilities and licensing.

IBM Certified Solution Advisor – Security Intelligence V1

Job Role Description / Target Audience
These solution advisors identify opportunities and influence direction across the IBM Security Intelligence portfolio.

Overall, these solution advisors are able to discover, scope, plan, design and consult. They recommend education, influence key decision makers, are able to respond to Request for Proposals (RFPs) and Request for Quotes (RFQs) and understand licensing and pricing.

These solution advisors also understand infrastructure and application security, and competitive analysis. They also have knowledge of the broader IBM Security portfolio and software development cycle, the IBM Security Intelligence products, business drivers and licensing.

These solution advisors are generally self-sufficient and able to perform most of the tasks involved in the job role with limited assistance.

To attain the IBM Certified Solution Advisor – Security Intelligence V1 certification, candidates must pass 1 test. To prepare for the test, it is recommended to refer to the job role description and recommended prerequisite skills, and click the link to the test below to refer to the test objectives and the Test preparation tab.

Recommended Prerequisite Skills
Basic understanding of IBM Security products with emphasis on IBM Security Intelligence portfolio.
General knowledge of network security practices.
Common knowledge of regulatory compliance.

Requirements
Test C2150-602 – IBM Security Intelligence Solution Advisor V1

The test:
contains questions requiring single and multiple answers. For multiple-answer questions, you need to choose all required options to get the answer correct. You will be advised how many options make up the correct answer.
is designed to provide diagnostic feedback on the Examination Score Report, correlating back to the test objectives, informing the test taker how he or she did on each section of the test. As a result, to maintain the integrity of each test, questions and answers are not distributed.

Click here to view complete Q&A of C2150-602 exam
Certkingdom Review, Certkingdom C2150-602 PDF

 


 

Best IBM C2150-602 Certification, IBM C2150-602 Training at certkingdom.com


Continue Reading

C5050-300 Foundations of IBM DevOps V1

Test information:
Number of questions: 61
Time allowed in minutes: 90
Required passing score: 70%
Languages: English, Japanese

Related certifications:
IBM Certified Solution Advisor – DevOps V1

Section 1 – DevOps Principles
Define DevOps
Summarize different development approaches
Explain and identify delivery pipelines
Explain lean principles
Explain DevOps practices
Describe Collaborative Development
Describe Continuous Integration
Describe Continuous Delivery
Describe Continuous Deployment
Describe Continuous Availability / Service Management / Monitoring
Describe Continuous Security / Security for DevOps
Explain Shift-Left Test /Continuous Test
Explain Shift Left Ops
Explain Multi-speed IT
Explain Continuous Feedback
Explain the implications of the "12 Factor app" design principles for DevOps
Explain how ITIL and DevOps relate

Section 2 – Adopting DevOps
Describe business and IT drivers of DevOps
Explain the barriers to adoption of DevOps
Explain how to build a roadmap for DevOps adoption
Explain how to adopt DevOps in Multi-speed IT environment
Explain other continuous improvement approaches
Illustrate the cultural & organizational differences when transforming from traditional to DevOps processes
Explain the benefits of Design Thinking for DevOps process adoption

Section 3 – IBM DevOps Reference Architecture & Methods
Describe IBM DevOps Reference Architecture pattern
Explain the IBM point of view on DevOps
Explain DevOps for Microservices
Explain DevOps for Cloud Native
Explain DevOps for Cloud Ready
Explain Cloud Service Management Operations
Describe the IBM Bluemix Garage Method
Define and identify the common components of a DevOps Tool chain
Describe the key architectural decisions made to adopt DevOps
Describe the concepts of Software Defined Environments

Section 4 – Open Standards, Open Source & Other Common Components of DevOps
Identify tools for Build & Deploy
Identify other common tools and their uses
Describe common container technology
Explain the applicability of open standards for DevOps

Section 5 – IBM Solution for DevOps
Describe the IBM solutions for the THINK phase in DevOps
Describe the IBM solutions for the CODE phase in DevOps
Describe the IBM solutions for the DELIVER phase in DevOps
Describe the IBM solutions for the RUN phase in DevOps
Describe the IBM solutions for the MANAGE phase in DevOps
Describe the IBM solutions for the LEARN phase in DevOps
Describe the IBM solutions for the CULTURE phase in DevOps
Describe the IBM solutions for Security in DevOps
Describe the IBM solutions for transformation and connectivity in DevOps
IBM Certified Solution Advisor – DevOps V1

Job Role Description / Target Audience
An IBM Certified Solution Advisor – DevOps V1 is a person who can clearly explain the benefits and underlying concepts of DevOps, and has practical experience of implementing DevOps processes and solutions for clients. They can advise stakeholders on how to adopt DevOps, how to overcome barriers, and how to realize the business benefits of DevOps. They can also demonstrate how the leading industry, Open and IBM solution offerings can help customers realize these benefits.

Key areas of competency include:
Clearly articulate the benefits of DevOps for driving business agility and continuous innovation.
Advise stakeholders on how to remove barriers to the adoption of DevOps, and implement organizational change and continual process improvement.
Have a deep working experience of Continuous delivery (integration, delivery, collaboration, innovation) practices.
Experience of application development lifecycle, operational methods, SCM, version control and common tooling for cloud-ready and cloud-native application development.
Working knowledge of development, test automation and virtualization, deployment, and operational best practices.
Understand the IBM DevOps reference architecture patterns, and can apply them to DevOps solutions.
Recommend the best approach, tooling and consumption models (on premises / public / SaaS) across the IBM solution portfolio (and leading open toolchain components).

Recommended Prerequisite Skills

The following qualifications are requirements for success:
Advanced knowledge of DevOps principles, practices, and development approaches
Advanced experience of Continuous delivery (integration, delivery, collaboration, innovation) practices.
Working knowledge of the IBM DevOps Reference Architecture and associated adoption patterns.
Working knowledge of tooling and consumption models (on-premises / public / SaaS) from the IBM DevOps portfolio.
Working knowledge of the IBM DevOps solution offerings.

Requirements
This certification requires 1 test(s).


QUESTION 1
Which type of tests are designed to verify that security features such as authentication and logout work as expected?

A. build verification
B. network vulnerability
C. functional security
D. synthetic user

Answer: C


QUESTION 2
When long-lived source control management (SCM) branches are merged, significant amounts of work can be required to resolve code conflicts. Which DevOps practice addresses this problem?

A. continuous integration
B. test-driven development
C. A/B testing
D. continuous deployment

Answer: A


QUESTION 3
What are two key metrics for cloud native applications? (Choose two.)

A. performance
B. stability
C. mean time between failures (MTBF)
D. first failure data capture (FFDC)
E. speed of change

Answer: A,E



QUESTION 4
Which volume of the Information Technology Infrastructure Library (ITIL) should be an integral part of every stage of the ITIL service management framework?

A. ITILService Design
B. ITIL Service Operations
C. ITIL Continual Service Improvement
D. ITIL Service Strategy

Answer: C



QUESTION 5
How does adopting DevOps help improve insight into the real value of applications?

A. by using mean time between failure (MTBF) metrics
B. by using customer feedback
C. by using usage statistics for cloud native applications
D. by using analytical analysis for return on investment (ROI) calculations

Answer: B

Click here to view complete Q&A of C5050-300 exam
Certkingdom Review, Certkingdom C5050-300 PDF

 


 

Best IBM C5050-300 Certification, IBM C5050-300 Training at certkingdom.com


Continue Reading

JN0-634 JNCIP-SEC Exam Objectives

This list provides a general view of the skill set required to successfully complete the specified certification exam. Topics listed are subject to change.

Application-Aware Security Services
Security Director Logging and Reporting
Sky ATP
Unified Threat Management (UTM)
Intrusion Prevention System (IPS)
Software Defined Secure Networks (SDSN)
User Firewall
Layer 2 Security

Application-Aware Security Services
Describe the concepts, operation, or functionality of AppSecure
Application identification
Custom Applications
Application Signatures
Application Tracking
AppQoS
AppFirewall
Given a scenario, demonstrate how to configure or monitor AppSecure

Security Director Logging and Reporting
Describe the concepts, operation, or functionality of Security Director logging and reporting
Security Director logging and reporting Installation
Security policy design and application
Analyzing data
Given a scenario, demonstrate how to configure or monitor Security Director logging and reporting information

Sky ATP
Describe the concepts, operation, or functionality of Sky ATP
Functions and processing flow
Analysis and actions
Monitoring and reporting
Given a scenario, demonstrate how to configure or monitor Sky ATP

Unified Threat Management (UTM)
Describe the concepts, operation, or functionality of UTM
Processing order
Content Filtering
Anti-virus
Anti-Spam
Web filtering
Given a scenario, demonstrate how to configure or monitor UTM functions

Intrusion Prevention System (IPS)
Describe the concepts, operation, or functionality of IPS
Processing order
Signatures
Policy
Templates
Given a scenario, demonstrate how to configure or monitor IPS functions

Software Defined Secure Networks (SDSN)
Describe the concepts, operation, or functionality of SDSN
SDSN fundamentals
Policy Enforcer
SDSN components
Given a scenario, demonstrate how to configure or monitor SDSN deployments

User Firewall

Describe the concepts, operation, or functionality of the user firewall
Integrated user firewall
User firewall implementation
Authentication sources
Given a scenario, demonstrate how to configure or monitor the user firewall

Layer 2 Security
Describe the concepts, operation, or functionality of Layer 2 security
Transparent mode
Mixed mode
Secure wire
MACsec
Given a scenario, demonstrate how to configure or monitor Layer 2 security


QUESTION 2 – (Topic 1)
In the IPS packet processing flow on an SRX Series device, when does application identification occur?

A. before fragmentation processing
B. after protocol decoding
C. before SSL decryption
D. after attack signature matching

Answer: A


QUESTION 4 – (Topic 1)
Click the Exhibit button.
user@host> monitor traffic interface ge-0/0/3
verbose output suppressed, use <detail> or <extensive> for full protocol decode
Address resolution is ON. Use <no-resolve> to avoid any reverse lookup delay.
Address resolution timeout is 4s.
Listening on ge-0/0/3, capture size 96 bytes
Reverse lookup for 172.168.3.254 failed (check DNS reachability). Other reverse lookup failures will not be reported.
Use <no-resolve> to avoid reverse lookups on IP addresses.
19:24:16.320907 In arp who-has 172.168.3.254 tell 172.168.3.1
19:24:17.322751 In arp who-has 172.168.3.254 tell 172.168.3.1
19:24:18.328895 In arp who-has 172.168.3.254 tell 172.168.3.1
19:24:18.332956 In arp who-has 172.168.3.254 tell 172.168.3.1
A new server has been set up in your environment. The administrator suspects that the firewall is blocking the traffic from the new server. Previously existing servers in the VLAN are working correctly. After reviewing the logs, you do not see any traffic for the new server.
Referring to the exhibit, what is the cause of the problem?

A. The server is in the wrong VLAN.
B. The server has been misconfigured with the wrong IP address.
C. The firewall has been misconfigured with the incorrect routing-instance.
D. The firewall has a filter enabled to block traffic from the server.

Answer: B


QUESTION 5 – (Topic 1)
Click the Exhibit button.
— Exhibit —
CID-0:RT: flow process pak fast ifl 71 in_ifp ge-0/0/5.0
CID-0:RT: ge-0/0/5.0:10.0.0.2/55892->192.168.1.2/80, tcp, flag 2 syn
CID-0:RT: find flow: table 0x5a386c90, hash 50728(0xffff), sa 10.0.0.2, da 192.168.1.2, sp 55892, dp 80, proto 6, tok 7
CID-0:RT: no session found, start first path. in_tunnel - 0x0, from_cp_flag - 0
CID-0:RT: flow_first_create_session
CID-0:RT: flow_first_in_dst_nat: in <ge-0/0/5.0>, out <N/A> dst_adr 192.168.1.2, sp 55892, dp 80
CID-0:RT: chose interface ge-0/0/5.0 as incoming nat if.
CID-0:RT: flow_first_rule_dst_xlate: DST no-xlate: 0.0.0.0(0) to 192.168.1.2(80)
CID-0:RT: flow_first_routing: vr_id 0, call flow_route_lookup(): src_ip 10.0.0.2, x_dst_ip 192.168.1.2, in ifp ge-0/0/5.0, out ifp N/A sp 55892, dp 80, ip_proto 6, tos 10
CID-0:RT: Doing DESTINATION addr route-lookup
CID-0:RT: routed (x_dst_ip 192.168.1.2) from LAN (ge-0/0/5.0 in 0) to ge-0/0/1.0, Next-hop: 172.16.32.1
CID-0:RT: flow_first_policy_search: policy search from zone LAN -> zone WAN (0x0,0xda540050,0x50)
CID-0:RT: Policy lkup: vsys 0 zone(7:LAN) -> zone(6:WAN) scope:0
CID-0:RT: 10.0.0.2/55892 -> 192.168.1.2/80 proto 6
CID-0:RT: Policy lkup: vsys 0 zone(5:Unknown) -> zone(5:Unknown) scope:0
CID-0:RT: 10.0.0.2/55892 -> 192.168.1.2/80 proto 6
CID-0:RT: app 6, timeout 1800s, curr ageout 20s
CID-0:RT: packet dropped, denied by policy
CID-0:RT: denied by policy default-policy-00(2), dropping pkt
CID-0:RT: packet dropped, policy deny.
CID-0:RT: flow find session returns error.
CID-0:RT: ----- flow_process_pkt rc 0x7 (fp rc -1)
CID-0:RT: jsf sess close notify
CID-0:RT: flow_ipv4_del_flow: sess , in hash 32
— Exhibit —
A host is not able to communicate with a Web server.
Based on the logs shown in the exhibit, what is the problem?

A. A policy is denying the traffic between these two hosts.
B. A session has not been created for this flow.
C. A NAT policy is translating the address to a private address.
D. The session table is running out of resources.

Answer: A
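The trace shows the session hitting default-policy-00 and being dropped. A hedged Junos sketch of a security policy that would permit this flow instead (the zone names LAN and WAN come from the trace; the policy name is illustrative):

```
set security policies from-zone LAN to-zone WAN policy allow-web match source-address any
set security policies from-zone LAN to-zone WAN policy allow-web match destination-address any
set security policies from-zone LAN to-zone WAN policy allow-web match application junos-http
set security policies from-zone LAN to-zone WAN policy allow-web then permit
```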


QUESTION 6 – (Topic 1)
Your management has a specific set of Web-based applications that certain employees are allowed to use.
Which two SRX Series device features would be used to accomplish this task? (Choose two.)
A. UserFW
B. IDP
C. AppFW
D. firewall filter

Answer: A,C


QUESTION 7 – (Topic 1)
You configured a custom signature attack object to match specific components of an attack:
HTTP-request
Pattern .*\x90 90 90 … 90
Direction: client-to-server
Which client traffic would be identified as an attack?

A. HTTP GET .*\x90 90 90 … 90
B. HTTP POST .*\x90 90 90 … 90
C. HTTP GET .*x909090 … 90
D. HTTP POST .*x909090 … 90

Answer: A
Reference: http://www.juniper.net/techpubs/en_US//idp/topics/task/configuration/intrusion-detection-prevention-signature-attack-object-creating-nsm.html
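A hedged sketch of the equivalent custom attack object in Junos set-command form (the object name is illustrative, and the exact context name and hex-escape syntax vary by release; treat this as the shape of the configuration, not exact syntax):

```
set security idp custom-attack http-shellcode severity major
set security idp custom-attack http-shellcode attack-type signature context http-url
set security idp custom-attack http-shellcode attack-type signature pattern ".*\x90 90 90 … 90"
set security idp custom-attack http-shellcode attack-type signature direction client-to-server
```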

 

Click here to view complete Q&A of JN0-634 exam
Certkingdom Review, Certkingdom JN0-634 PDF

 


 

Best Juniper JN0-634 Certification, Juniper JN0-634 Training at certkingdom.com


Continue Reading

C2150-500 IBM Security Dynamic and Static Applications V2 Fundamentals

Test information:
Number of questions: 57
Time allowed in minutes: 120
Required passing score: 58%
Languages: English, French, Latin American Spanish, Portuguese (Brazil)

Related certifications:
IBM Certified Solution Advisor – Security Dynamic and Static Applications V2

Section 1 – Application Security (20%)
Given a scenario, differentiate between DAST, SAST, and/or IAST.
Identify key or necessary triage tasks for DAST and SAST.
Given a scenario, demonstrate various reporting tasks.
Given a scenario, explain continuous delivery tasks, i.e., defect tracking, integrating with SDLC.
Identify AppScan Source remediation tasks.
Given a scenario, identify common web application vulnerabilities.
Identify types of external references that AppScan tool provides.

Section 2 – Competitive Analysis (7%)
Identify the competitive position of AppScan from the perspective of the Gartner Magic Quadrant.
Identify the strengths of the AppScan offering.
Identify the benefits of using AppScan tools, rather than their alternatives.

Section 3 – IBM Security Portfolio (10%)
Given a scenario, identify how AppScan fits into the IBM security framework.
Given a scenario, identify how AppScan fits into the IBM mobile security framework.

Section 4 – Software Development Lifecycle (17%)
Identify ways to integrate AppScan into a build process.
Given a scenario, demonstrate ways to integrate AppScan into a build process.
Identify where blackbox and whitebox solutions fit into secure SDLC.
Given a scenario, explain common development platforms (Ex. Java, .NET, C/C++).
Given a scenario, demonstrate the extensibility of AppScan tools.
Identify the extensibility of AppScan tools.

Section 5 – AppScan Product Knowledge (21%)
Given a scenario, explain how components of the AppScan suite are used in different deployments.
Given a scenario, determine if AppScan can provide a solution.
Identify potential deployment architectures.
Identify supported AppScan development frameworks.
Identify the advantages, purposes, and offerings of integrating AppScan with security tools.

Section 6 – Mobile Security (11%)
Identify the common types of mobile vulnerabilities.
Identify the mobile support platform for AppScan Source and integration with IBM Worklight.

Section 7 – Business Drivers (6%)
Given a scenario, demonstrate how AppScan can solve common problems.
Given a scenario, explain how AppScan can impact a company’s budget.
Given a scenario, explain Application security compliance drivers.

Section 8 – Licensing (8%)
Identify the required license structure for each component in AppScan.
Given a scenario, identify the licenses required for a specific deployment.

IBM Certified Solution Advisor – Security Dynamic and Static Applications V2

Job Role Description / Target Audience
This entry level certification is for solution advisors that are able to identify opportunities and influence direction across the AppScan portfolio. They recommend education, influence key decision makers, are able to respond to RFPs & RFQs, and understand licensing and pricing.

These solution advisors understand application security and competitive analysis, have knowledge of the broader IBM Security portfolio and the software development cycle, have AppScan product knowledge, and understand mobile security, business drivers and licensing.

This is a technical sales role (CTP/pre-sales engineer) certification.
To attain the IBM Certified Solution Advisor – Security Dynamic and Static Applications V2 certification, candidates must pass 1 test. To gain additional knowledge and skills, and prepare for this test based on the job role and test objectives, take the link to the test below, and refer to the Test Preparation tab.

Recommended Prerequisite Skills
Have static analysis skills:
Read and program code
Configure source code to compile (build) an application
Remediate trivial errors in Java and .NET apps: low-hanging fruit
Have dynamic analysis skills:
Understand the web application architecture
Produce high-level deployment architecture solutions.
Write technically.
Comfortable discussing technical concepts with developers.
Comfortable discussing business and financial concepts with managers and executives.

Requirements
This certification requires 1 test(s).

Test(s) required:
Test C2150-500 – IBM Security Dynamic and Static Applications V2 Fundamentals

The test:
contains questions requiring single and multiple answers. For multiple-answer questions, you need to choose all required options to get the answer correct. You will be advised how many options make up the correct answer.
is designed to provide diagnostic feedback on the Examination Score Report, correlating back to the test objectives, informing the test taker how he or she did on each section of the test. As a result, to maintain the integrity of each test, questions and answers are not distributed.

Click here to view complete Q&A of C2150-500 exam
Certkingdom Review, Certkingdom C2150-500 PDF

 


 

Best IBM C2150-500 Certification, IBM C2150-500 Training at certkingdom.com


Continue Reading

300-170 DCVAI Implementing Cisco Data Center Virtualization and Automation

Exam Number 300-170 DCVAI
Associated Certifications CCNP Data Center
Duration 90 minutes (60-70 questions)
Available Languages English

This exam tests a candidate’s knowledge of implementing data center infrastructure including virtualization, automation, Cisco Application Centric Infrastructure (ACI), ACI network resources, and ACI management and monitoring.

Exam Description
The Implementing Cisco Data Center Virtualization and Automation (DCVAI) exam (300-170) is a 90-minute, 60–70 question assessment. This exam is one of the exams associated with the CCNP Data Center Certification. This exam tests a candidate’s knowledge of implementing Cisco data center infrastructure including virtualization, automation, Application Centric Infrastructure, Application Centric Infrastructure network resources, and Application Centric Infrastructure management and monitoring. The course, Implementing Cisco Data Center Virtualization and Automation v6 (DCVAI), helps candidates to prepare for this exam because the content is aligned with the exam topics.

The following topics are general guidelines for the content likely to be included on the exam. However, other related topics may also appear on any specific delivery of the exam. In order to better reflect the contents of the exam and for clarity purposes, the guidelines below may change at any time without notice.

1.0 Implement Infrastructure Virtualization 19%

1.1 Implement logical device separation

1.1.a VDC
1.1.b VRF

1.2 Implement virtual switching technologies

2.0 Implement Infrastructure Automation 16%

2.1 Implement configuration profiles

2.1.a Auto-config
2.1.b Port profiles
2.1.c Configuration synchronization

2.2 Implement POAP

2.3 Compare and contrast different scripting tools

2.3.a EEM
2.3.b Scheduler
2.3.c SDK

3.0 Implementing Application Centric Infrastructure 27%

3.1 Configure fabric discovery parameters

3.2 Implement access policies

3.2.a Policy groups
3.2.b Protocol policies
3.2.b [i] LLDP, CDP, LACP, and link-level
3.2.c AEP
3.2.d Domains
3.2.e Pools
3.2.f Profiles
3.2.f [i] Switch
3.2.f [ii] Interface

3.3 Implement VMM domain integrations

3.4 Implement tenant-based policies

3.4.a EPGs
3.4.a [i] Pathing
3.4.a [ii] Domains
3.4.b Contracts
3.4.b [i] Consumer
3.4.b [ii] Providers
3.4.b [iii] vzAny (TCAM conservation)
3.4.b [iv] Inter-tenant
3.4.c Private networks
3.4.c [i] Enforced/unenforced
3.4.d Bridge domains
3.4.d [i] Unknown unicast settings
3.4.d [ii] ARP settings
3.4.d [iii] Unicast routing

4.0 Implementing Application Centric Infrastructure Network Resources 25%

4.1 Implement external network integration

4.1.a External bridge network
4.1.b External routed network

4.2 Implement packet flow

4.2.a Unicast
4.2.b Multicast
4.2.c Broadcast
4.2.d Endpoint database

4.3 Describe service insertion and redirection

4.3.a Device packages
4.3.b Service graphs
4.3.c Function profiles

5.0 Implementing Application Centric Infrastructure Management and Monitoring 13%

5.1 Implement management

5.1.a In-band management
5.1.b Out-of-band management

5.2 Implement monitoring

5.2.a SNMP
5.2.b Atomic counters
5.2.c Health score evaluations

5.3 Implement security domains and role mapping

5.3.a AAA
5.3.b RBAC

5.4 Compare and contrast different scripting tools

5.4.a SDK
5.4.b API Inspector / XML

QUESTION 1
You have a Cisco Nexus 1000V Series Switch. When must you use the system VLAN?

A. to use VMware vMotion
B. to perform an ESXi iSCSI boot
C. to perform a VM iSCSI boot
D. to perform an ESXi NFS boot

Answer: B
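For context, system VLANs on the Nexus 1000V are declared in a port profile so that the VEM can forward traffic on those VLANs before it has received its programming from the VSM, which is why boot-time traffic needs them. A hedged sketch (profile name and VLAN ID are illustrative):

```
port-profile type vethernet storage-boot
  switchport mode access
  switchport access vlan 100
  system vlan 100          ! forwarded even before VSM connectivity is established
  vmware port-group
  no shutdown
  state enabled
```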


QUESTION 2
Which option must be defined to apply a configuration across a potentially large number of switches in the most scalable way?

A. a configuration policy
B. a group policy
C. an interface policy
D. a switch profile

Answer: D


QUESTION 3
Which two options are benefits of using the configuration synchronization feature? (Choose two.)

A. Supports the feature command
B. Supports existing session and port profile functionality
C. can be used by any Cisco Nexus switch
D. merges configurations when connectivity is established between peers
E. supports FCoE in vPC topologies

Answer: A,C

Click here to view complete Q&A of 300-170 exam
Certkingdom Review
, Certkingdom pdf torrent

MCTS Training, MCITP Training

Best Cisco 300-170 Certification, Cisco 300-170 Training at certkingdom.com


Continue Reading

300-175 DCUCI Implementing Cisco Data Center Unified Computing

Exam Number 300-175 DCUCI
Associated Certifications CCNP Data Center
Duration 90 minutes (60-70 questions)
Available Languages English
Register Pearson VUE

This exam tests a candidate’s knowledge of implementing data center technologies including unified computing, unified computing maintenance and operations, automation, unified computing security, and unified computing storage.

Exam Description
The Implementing Cisco Data Center Unified Computing (DCUCI) exam (300-175) is a 90-minute, 60–70 question assessment. This exam is one of the exams associated with the CCNP Data Center certification. This exam tests a candidate’s knowledge of implementing Cisco data center technologies including unified computing, unified computing maintenance and operations, automation, unified computing security, and unified computing storage. The course, Implementing Cisco Data Center Unified Computing v6 (DCUCI), helps candidates prepare for this exam because the content is aligned with the exam topics.

The following topics are general guidelines for the content likely to be included on the exam. However, other related topics may also appear on any specific delivery of the exam. In order to better reflect the contents of the exam and for clarity purposes, the guidelines below may change at any time without notice.

1.0 Implement Cisco Unified Computing 28%

1.1 Install Cisco Unified Computing platforms
1.1.a Stand-alone computing
1.1.b Chassis / blade
1.1.c Modular / server cartridges
1.1.d Server integration

1.2 Implement server abstraction technologies
1.2.a Service profiles
1.2.a [i] Pools
1.2.a [ii] Policies
1.2.a [ii].1 Connectivity
1.2.a [ii].2 Placement policy
1.2.a [ii].3 Remote boot policies
1.2.a [iii] Templates
1.2.a [iii].1 Policy hierarchy
1.2.a [iii].2 Initial vs updating

2.0 Unified Computing Maintenance and Operations 20%

2.1 Implement firmware upgrades, packages, and interoperability

2.2 Implement backup operations

2.3 Implement monitoring

2.3.a Logging
2.3.b SNMP
2.3.c Call Home
2.3.d NetFlow
2.3.e Monitoring session

3.0 Automation 12%

3.1 Implement integration of centralized management

3.2 Compare and contrast different scripting tools

3.2.a SDK
3.2.b XML

4.0 Unified Computing Security 13%

4.1 Implement AAA and RBAC

4.2 Implement key management

5.0 Unified Computing Storage 27%

5.1 Implement iSCSI

5.1.a Multipath
5.1.b Addressing schemes

5.2 Implement Fibre Channel port channels

5.3 Implement Fibre Channel protocol services

5.3.a Zoning
5.3.b Device alias
5.3.c VSAN

5.4 Implement FCoE

5.4.a FIP
5.4.b FCoE topologies
5.4.c DCB

5.5 Implement boot from SAN

5.5.a FCoE / Fibre Channel
5.5.b iSCSI

QUESTION 3 – (Topic 1)
Which two statements are true concerning authorization when using RBAC in a Cisco Unified Computing System? (Choose two.)

A. A locale without any organizations, allows unrestricted access to system resources in all organizations.
B. When a user has both local and remote accounts, the roles defined in the remote user account override those in the local user account.
C. A role contains a set of privileges which define the operations that a user is allowed to take.
D. Customized roles can be configured on and downloaded from remote AAA servers.
E. The logical resources, pools and policies, are grouped into roles.

Answer: A,C

QUESTION 4 – (Topic 1)
Which actions must be taken in order to connect a NetApp FCoE storage system to a Cisco UCS system?

A. Ensure that the Fibre Channel switching mode is set to Switching, and use the Fibre Channel ports on the Fabric Interconnects.
B. Ensure that the Fibre Channel switching mode is set to Switching, and reconfigure the port to a FCoE Storage port.
C. Ensure that the Fibre Channel switching mode is set to End-Host, and use the Ethernet ports on the Fabric interconnects.
D. Ensure that the Fibre Channel switching mode is set to Switching, and use the Ethernet ports on the Fabric Interconnects.

Answer: B

QUESTION 5 – (Topic 1)
Which two protocols are accepted by the Cisco UCS Manager XML API? (Choose two.)

A. SMASH
B. HTTPS
C. HTTP
D. XMTP
E. SNMP

Answer: B,C

QUESTION 6 – (Topic 1)
A Cisco UCS administrator is planning to complete a firmware upgrade by using Auto Install. Which two options are prerequisites to run Auto Install? (Choose two.)

A. minor fault fixing
B. configuration backup
C. service profiles unmounted from the blade servers
D. time synchronization
E. fault suppression started on the blade servers

Answer: B,D

QUESTION 7 – (Topic 1)
Which two prerequisites are required to configure a SAN boot from the FCoE storage of a Cisco UCS system? (Choose two.)

A. The Cisco UCS domain must be able to communicate with the SAN storage device that hosts the operating system image.
B. A boot policy must be created that contains a local disk, and the LVM must be configured correctly.
C. There must be iVR-enabled FCoE proxying between the Cisco UCS domain and the SAN storage device that hosts the operating system image.
D. There must be a boot target LUN on the device where the operating system image is
located.
E. There must be a boot target RAID on the device where the operating system image is located.

Answer: A,D

Click here to view complete Q&A of 300-175 exam
Certkingdom Review
, Certkingdom pdf torrent

Best Cisco 300-175 Certification, Cisco 300-175 Training at certkingdom.com


Continue Reading

C2150-210 IBM Security Identity Governance Fundamentals V5.1

Test information:
Number of questions: 47
Time allowed in minutes: 90
Required passing score: 58%
Languages: English, French, Latin American Spanish, Portuguese (Brazil)

Related certifications:
IBM Certified Associate – Security Identity Governance V5.1

Certifications (13%)
Define certification dataset and campaign
Define signoff options
Define supervisor and reviewer activities
Define notification configuration

Role Management (9%)
Define role structure
Publish role and define visibility
Consolidate role

Role Mining (15%)
Load Access Optimizer data
Create Role Mining session
Analyse statistics charts to identify candidate role
Analyse assignment map to identify candidate role
Analyse entitlement and user coverage to identify candidate role
Leverage candidate role in IAG warehouse

Role Maintenance and Health (6%)
Identify unused roles
Retire role
Setup Role Certification campaign

Reporting (13%)
Identify standard report
Customize report layout
Configure scope visibility
Customize query and add filter criteria
Configure authorization to report for selected users

Separation of Duties (17%)
Define Business Activities
Define SoD Policy
Define Technical Transformation
Analyse Risk Violations
Define Mitigation Controls
Setup Risk Violation Certification Campaign
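The Separation of Duties tasks above all reduce to one evaluation: a policy defines pairs of conflicting business activities, and a user violates the policy when their entitlements map onto both halves of a pair. A minimal pure-Python sketch of that set logic follows; the entitlement and activity names are invented for illustration, and this is not the ISIG API (the product evaluates policies over its governance warehouse).

```python
# Hypothetical illustration of separation-of-duties (SoD) evaluation.
# Names below are invented for the example; this is not the ISIG API.

# Map from entitlement (e.g. an application permission) to business activity.
ENTITLEMENT_TO_ACTIVITY = {
    "ap_create_invoice": "Create Invoice",
    "ap_approve_invoice": "Approve Invoice",
    "gl_post_journal": "Post Journal",
}

# SoD policy: pairs of activities one person must not hold together.
SOD_POLICY = {frozenset({"Create Invoice", "Approve Invoice"})}

def sod_violations(entitlements):
    """Return the set of conflicting activity pairs a user triggers."""
    activities = {ENTITLEMENT_TO_ACTIVITY[e]
                  for e in entitlements if e in ENTITLEMENT_TO_ACTIVITY}
    return {pair for pair in SOD_POLICY if pair <= activities}

print(sod_violations({"ap_create_invoice", "ap_approve_invoice"}))
print(sod_violations({"ap_create_invoice", "gl_post_journal"}))
```

Mitigation controls, in these terms, are documented exceptions attached to a detected pair rather than changes to the set logic itself.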

Installation (9%)
Prepare database server and schema
Configure virtual machine
Install virtual appliance
Configure database connections

Enterprise Integration (4%)
Identify ISIM and ISIG integration options
Identify supported connectors

ISIG Authorization Model (9%)
Define functional authorization for ISIG users
Restrict the data portion for a functional authorization
Define and use Attribute Groups

Access Request Management (9%)
Identify common process activities
Identify UI customization options
Review access request status

IBM Certified Associate – Security Identity Governance V5.1

Job Role Description / Target Audience
An IBM Certified Associate – Security Identity Governance V5.1 is an individual with entry-level knowledge of and experience with IBM Security Identity Governance V5.1. This individual is knowledgeable about the fundamental concepts of IBM Security Identity Governance V5.1 through hands-on experience. The associate should have an in-depth knowledge of the basic to intermediate tasks required in day-to-day use of IBM Security Identity Governance V5.1. The individual should be able to complete these tasks with little to no assistance from documentation, peers, or support.

Key Areas of Competency
IBM Security Identity Governance UI from an admin and end user perspective
Identify the key ISIG features
Understand the benefits of using ISIG for identity and access governance.

Recommended Prerequisite Skills
Working end user knowledge of IBM Security Identity Governance V5.1
Understand Identity Governance, Risk and Compliance (GRC) infrastructure such as audit, reporting, access
review, and certification.
Experience with role modeling and role mining
Experience with role health and maintenance.
Understand the ISIG entitlement model and how to leverage it to build target application authorization models.
Understand the ISIG authorization model and access governance responsibilities.
Experience performing an RFP in the access governance space.
Understand business activity-based separation of duties modeling for better business and auditor readability.
Understand typical functionality of access request workflows such as manager approvals.

Requirements
This certification requires 1 test(s).

Click here to view complete Q&A of C2150-210 exam
Certkingdom Review
, Certkingdom C2150-210 PDF

Best IBM C2150-210 Certification, IBM C2150-210 Training at certkingdom.com


Continue Reading

70-773 Analyzing Big Data with Microsoft R

Exam 70-773
Analyzing Big Data with Microsoft R

Published: January 3, 2017
Languages: English
Audiences: Data scientists
Technology: Microsoft R Server, SQL R Services
Credit toward certification: MCP, MCSE

Skills measured
This exam measures your ability to accomplish the technical tasks listed below. View video tutorials about the variety of question types on Microsoft exams.

Please note that the questions may test on, but will not be limited to, the topics described in the bulleted text.

Do you have feedback about the relevance of the skills measured on this exam? Please send Microsoft your comments. All feedback will be reviewed and incorporated as appropriate while still maintaining the validity and reliability of the certification process. Note that Microsoft will not respond directly to your feedback. We appreciate your input in ensuring the quality of the Microsoft Certification program.

If you have concerns about specific questions on this exam, please submit an exam challenge.

If you have other questions or feedback about Microsoft Certification exams or about the certification program, registration, or promotions, please contact your Regional Service Center.

Read and explore big data
Read data with R Server
Read supported data file formats, such as text files, SAS, and SPSS; convert data to XDF format; identify trade-offs between XDF and flat text files; read data through Open Database Connectivity (ODBC) data sources; read in files from other file systems; use an internal data frame as a data source; process data from sources that cannot be read natively by R Server
Summarize data
Compute crosstabs and univariate statistics, choose when to use rxCrossTabs versus rxCube, integrate with open source technologies by using packages such as dplyrXdf, use group by functionality, create complex formulas to perform multiple tasks in one pass through the data, extract quantiles by using rxQuantile
Visualize data
Visualize in-memory data with base plotting functions and ggplot2; create custom visualizations with rxSummary and rxCube; visualize data with rxHistogram and rxLinePlot, including faceted plots
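rxCrossTabs and rxCube both compute contingency tables over the data (rxCube returns the counts in long format); the underlying operation is an ordinary cross-tabulation. The sketch below shows what those cell counts are in plain Python, using invented example records rather than the RevoScaleR API:

```python
from collections import Counter

# What a cross-tabulation computes, in plain Python. The records below
# are invented example data, not RevoScaleR output.
records = [
    {"sex": "F", "drug": "A"},
    {"sex": "F", "drug": "B"},
    {"sex": "M", "drug": "A"},
    {"sex": "M", "drug": "A"},
]

# Each (row-variable, column-variable) pair becomes one cell count.
crosstab = Counter((r["sex"], r["drug"]) for r in records)
print(crosstab[("M", "A")])  # count for the M/A cell
```

The trade-off the exam targets is where this runs: rxCrossTabs does it chunk-wise over an XDF file, while the open-source analogues (e.g. dplyrXdf mentioned above) require the integration packages listed in the blueprint.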

Process big data
Process data with rxDataStep
Subset rows of data, modify and create columns by using the Transforms argument, choose when to use on-the-fly transformations versus in-data transform trade-offs, handle missing values through filtering or replacement, generate a data frame or an XDF file, process dates (POSIXct, POSIXlt)
Perform complex transforms that use transform functions
Define a transform function; reshape data by using a transform function; use open source packages, such as lubridate; pass in values by using transformVars and transformEnvir; use internal .rx variables and functions for tasks, including cross-chunk communication
Manage data sets
Sort data in various orders, such as ascending and descending; use rxSort deduplication to remove duplicate values; merge data sources using rxMerge(); merge options and types; identify when alternatives to rxSort and rxMerge should be used
Process text using RML packages
Create features using RML functions, such as featurizeText(); create indicator variables and arrays using RML functions, such as categorical() and categoricalHash(); perform feature selection using RML functions
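The defining property of rxDataStep, which the tasks above revolve around, is that it streams data one chunk at a time, so row transforms and missing-value handling only ever touch one chunk in memory. A pure-Python sketch of that pattern follows, with invented data and field names; it illustrates the chunking idea, not the RevoScaleR API:

```python
# The idea behind rxDataStep: stream data chunk by chunk, applying row
# transforms and missing-value replacement without loading the whole
# dataset. Invented data; not the RevoScaleR API.

def chunks(rows, size):
    """Yield successive fixed-size chunks of the input rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def transform(chunk):
    """Per-chunk transform: replace missing values, derive a column."""
    out = []
    for row in chunk:
        age = row["age"] if row["age"] is not None else 0   # replace missing
        out.append({**row, "age": age, "is_adult": age >= 18})  # derived column
    return out

rows = [{"age": 25}, {"age": None}, {"age": 17}, {"age": 40}]
result = [r for c in chunks(rows, 2) for r in transform(c)]
print([r["is_adult"] for r in result])
```

Transform functions in rxDataStep work the same way, which is why cross-chunk communication (the .rx variables mentioned above) needs explicit support: each call sees only its own chunk.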

Build predictive models with ScaleR
Estimate linear models
Use rxLinMod, rxGlm, and rxLogit to estimate linear models; set the family for a generalized linear model by using functions such as rxTweedie; process data on the fly by using the appropriate arguments and functions, such as the F function and Transforms argument; weight observations through frequency or probability weights; choose between different types of automatic variable selections, such as greedy searches, repeated scoring, and byproduct of training; identify the impact of missing values during automatic variable selection
Build and use partitioning models
Use rxDTree, rxDForest, and rxBTrees to build partitioning models; adjust the weighting of false positives and misses by using loss; select parameters that affect bias and variance, such as pruning, learning rate, and tree depth; use as.rpart to interact with open source ecosystems
Generate predictions and residuals
Use rxPredict to generate predictions; perform parallel scoring using rxExec; generate different types of predictions, such as link and response scores for GLM, response, prob, and vote for rxDForest; generate different types of residuals, such as Usual, Pearson, and DBM
Evaluate models and tuning parameters
Summarize estimated models; run arbitrary code out of process, such as parallel parameter tuning by using rxExec; evaluate tree models by using RevoTreeView and rxVarImpPlot; calculate model evaluation metrics by using built-in functions; calculate model evaluation metrics and visualizations by using custom code, such as mean absolute percentage error and precision recall curves
Create additional models using RML packages
Build and use a One-Class Support Vector Machine, build and use linear and logistic regressions that use L1 and L2 regularization, build and use a decision tree by using FastTree, use FastTree as a recommender with ranking loss (NDCG), build and use a simple three-layer feed-forward neural network
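Reduced to a single predictor, the modeling pipeline above is: rxLinMod fits ordinary least squares, rxPredict produces fitted values, and residuals and metrics such as mean absolute percentage error follow directly from those. A self-contained numeric sketch of that chain (invented data; plain OLS arithmetic, not the RevoScaleR API):

```python
# OLS for y = a + b*x, then fitted values, residuals, and mean absolute
# percentage error (MAPE). Invented data; not the RevoScaleR API.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# Closed-form OLS estimates for slope and intercept.
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

fitted = [a + b * x for x in xs]            # what rxPredict would return
residuals = [y - f for y, f in zip(ys, fitted)]
mape = 100 * sum(abs(r / y) for r, y in zip(residuals, ys)) / n

print(round(b, 3))
```

The partitioning models (rxDTree, rxDForest, rxBTrees) differ in the fitting step, but the predict-then-score-residuals flow is the same.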

Use R Server in different environments
Use different compute contexts to run R Server effectively
Change the compute context (rxHadoopMR, rxSpark, rxLocalseq, and rxLocalParallel); identify which compute context to use for different tasks; use different data source objects, depending on the context (RxOdbcData and RxTextData); identify and use appropriate data sources for different data sources and compute contexts (HDFS and SQL Server); debug processes across different compute contexts; identify use cases for RevoPemaR
Optimize tasks by using local compute contexts
Identify and execute tasks that can be run only in the local compute context, identify tasks that are more efficient to run in the local compute context, choose between rxLocalseq and rxLocalParallel, profile across different compute contexts
Perform in-database analytics by using SQL Server
Choose when to perform in-database versus out-of-database computations, identify limitations of in-database computations, use in-database versus out-of-database compute contexts appropriately, use stored procedures for data processing steps, serialize objects and write back to binary fields in a table, write tables, configure R to optimize SQL Server (chunksize, numtasks, and computecontext), effectively communicate performance properties to SQL administrators and architects (SQL Server Profiler)
Implement analysis workflows in the Hadoop ecosystem and Spark
Use appropriate R Server functions in Spark; integrate with Hive, Pig, and Hadoop MapReduce; integrate with the Spark ecosystem of tools, such as SparklyR and SparkR; profile and tune across different compute contexts; use doRSR for parallelizing code that was written using open source foreach
Deploy predictive models to SQL Server and Azure Machine Learning
Deploy predictive models to SQL Server as a stored procedure, deploy an arbitrary function to Azure Machine Learning by using the AzureML R package, identify when to use DeployR
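A compute context separates what an analysis computes from where it runs: the same call executes sequentially under rxLocalseq, across local workers under rxLocalParallel, or on a cluster under rxSpark or rxHadoopMR. The toy strategy-pattern sketch below uses threads to stand in for the parallel context; the context names are borrowed for illustration only, and this is not the RevoScaleR API:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "compute context": the analysis (a map over tasks) is fixed, and
# the context decides how it executes -- sequentially (like rxLocalseq)
# or across workers (like rxLocalParallel). Illustration only; not the
# RevoScaleR API.

def run(func, tasks, context="localseq"):
    if context == "localseq":
        return list(map(func, tasks))        # one task at a time
    if context == "localpar":
        with ThreadPoolExecutor() as pool:   # worker threads
            return list(pool.map(func, tasks))
    raise ValueError(f"unknown compute context: {context}")

def square(x):
    return x * x

print(run(square, [1, 2, 3], "localseq") == run(square, [1, 2, 3], "localpar"))
```

The exam point this mirrors: results should be identical across contexts, so the choice is driven by data locality and task size, not by the answer you get.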


Question No : 1

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

You need to evaluate the significance of coefficients that are produced by using a model that was estimated already.

Which function should you use?

A. rxPredict
B. rxLogit
C. Summary
D. rxLinMod
E. rxTweedie
F. stepAic
G. rxTransform
H. rxDataStep

Answer: C

Explanation: https://docs.microsoft.com/en-us/r-server/r/how-to-revoscaler-linear-model


Question No : 2

You need to build a model that looks at the probability of an outcome. The model must support both L1 and L2 regularization. Which classification method should you use?

A. Two-Class Neural Network
B. Two-Class Support Vector Machine
C. Two-Class Decision Forest
D. Two-Class Logistic Regression

Answer: D

Click here to view complete Q&A of 70-773 exam
Certkingdom Review

Best Microsoft MCP 70-773 Certification, Microsoft 70-773 Training at certkingdom.com


Continue Reading


C2090-930 IBM SPSS Modeler Professional v3

Test information:
Number of questions: 60
Time allowed in minutes: 90
Required passing score: 67%
Languages: English, Japanese

Related certifications:
IBM Certified Specialist – SPSS Modeler Professional v3

This test will certify that the successful candidate has the fundamental knowledge to participate as an effective team member in the implementation of IBM SPSS Modeler Professional analytics solutions.

SPSS Modeler Professional Functionality (10%)
Identify the purpose of each palette
Describe the use of SuperNodes
Describe the advantages of SPSS Modeler scripting

Business Understanding and Planning (10%)
Describe the CRISP-DM process
Describe how to map business objectives to data mining goals

Data Understanding (15%)
Describe appropriate nodes for summary statistics, distributions, and visualizations (for example, graph nodes, output nodes)
Describe data quality issues (for example, outliers and missing data)

Data Preparation (20%)
Describe methods for data transformation (for example, Derive node, Auto Data Prep node, Data Audit node and Filler node)
Describe how to integrate data (for example, Merge node and Append node)
Describe sampling, partitioning, and balancing data (for example, Sample node, Balance node and Partition node)
Describe methods for refining data (for example, Select node, Filter node and Aggregate node)

Modeling (20%)
Describe classification models (including GLM and regression)
Describe segmentation models
Describe association models
Describe auto modeling nodes
Demonstrate how to combine models using the Ensemble node

Evaluation and Analysis (15%)
Demonstrate how to interpret SPSS Modeler results (for example, using Evaluation node, Analysis node, and data visualizations)
Describe how to use model nugget interfaces

Deployment (10%)
Describe how to use Export nodes (tools for exporting data)
Identify how to score new data using models
Identify SPSS Modeler reporting methods

IBM Certified Specialist – SPSS Modeler Professional v3

Job Role Description / Target Audience
The candidate has knowledge of analytical solutions, understands IBM SPSS Modeler capabilities, has knowledge of the IBM SPSS Modeler data model, can apply consistent methodologies to every engagement and develop SPSS predictive models.

To achieve the IBM Certified Specialist – SPSS Modeler Professional certification, candidates must possess the skills identified under Recommended Prerequisite Skills, if any, and pass one (1) exam.

Upon completion of this technical certification, the successful candidate demonstrates the fundamental knowledge to participate as an effective team member in the implementation of IBM SPSS Modeler Professional analytics solutions.

Recommended Prerequisite Skills
The following topics are assumed knowledge before your test preparation and will not be tested on:
Database and ODBC concepts
Basic proficiency in statistical concepts
Knowledge of basic computer programming

QUESTION 1
You have collected data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of five medications. The column, Drug, is a character field that describes the medication. You need to find out what proportion of the patients responded to each drug.
Which node should be used?

A. Web node
B. Distribution node
C. Sim Fit node
D. Evaluation node

Answer: B


QUESTION 2
When describing data, which two nodes address value types? (Choose two.)

A. Data Audit node
B. Statistics node
C. Type node
D. Report node

Answer: A,C


QUESTION 3
How many stages are there in the CRISP-DM process model?

A. 4
B. 6
C. 8
D. 10

Answer: B


QUESTION 4
An organization wants to determine why they are losing customers.
Which supervised modeling technique would be used to accomplish this task?

A. PCA
B. QUEST
C. Apriori
D. Kohonen

Answer: B


QUESTION 5
You want to create a Filter node to keep only a subset of the variables used in model building, based on predictor importance.
Which menu in the model nugget browser provides this functionality?

A. File
B. Preview
C. View
D. Generate

Answer: D

Click here to view complete Q&A of C2090-930 exam
Certkingdom Review
, Certkingdom C2090-930 PDF

Best IBM C2090-930 Certification, IBM C2090-930 Training at certkingdom.com


Continue Reading

C2090-913 Informix 4GL Development

Test information:
Number of questions: 90
Time allowed in minutes: 90
Required passing score: 78%
Languages: English

Related certifications:
IBM Certified Solutions Expert — Informix 4GL Developer

If you are a knowledgeable Informix 4GL Developer and are capable of performing the intermediate to advanced skills required to design and develop Informix database applications, you may benefit from this certification role.

Section 1 – Informix 4GL (18%)

Section 2 – Statements (28%)

Section 3 – Cursors and Memory (13%)

Section 4 – Creating a Help File: The mkmessage Utility (1%)

Section 5 – Creating a Report Driver (3%)

Section 6 – Defining Program Variables (3%)

Section 7 – Displaying Forms and Windows (4%)

Section 8 – Forms that use Arrays (4%)

Section 9 – Passing Values between Functions (6%)

Section 10 – Procedural Logic (1%)

Section 11 – The REPORT Functions (3%)

Section 12 – The SQLCA Record (6%)

IBM Certified Solutions Expert — Informix 4GL Developer

Job Role Description / Target Audience

To attain the IBM Certified Solutions Expert – Informix 4GL Developer certification, candidates must pass 1 test.

Recommended Prerequisite Skills
Significant experience as an Informix 4GL Developer.
QUESTION 1
Which parts of the DISPLAY ARRAY statement are always required? (Choose three.)

A. ON KEY keywords
B. screen array name
C. program array name
D. END DISPLAY keywords
E. DISPLAY ARRAY keywords
F. BEFORE DISPLAY keywords

Answer: B,C,E

QUESTION 2
What can the arr_count() library function be used to determine?

A. the current position in the screen array
B. the current position in the program array
C. the number of elements in the screen array
D. the number of elements in the program array

Answer: D


QUESTION 3
Which features are unique to the INPUT ARRAY statement? (Choose three.)

A. BEFORE/AFTER ROW clause
B. BEFORE/AFTER INPUT clause
C. BEFORE/AFTER FIELD clause
D. BEFORE/AFTER DELETE clause
E. BEFORE/AFTER INSERT clause

Answer: A,D,E

Click here to view complete Q&A of C2090-913 exam
Certkingdom Review, Certkingdom C2090-913 PDF

MCTS Training, MCITP Training

Best IBM C2090-913 Certification, IBM C2090-913 Training at certkingdom.com

 



C2090-719 InfoSphere Warehouse V9.5

Test information:
Number of questions: 60
Time allowed in minutes: 90
Required passing score: 65%
Languages: English, Japanese

Related certifications:
IBM Certified Solution Designer – InfoSphere Warehouse V9.5

This certification exam certifies that the successful candidate has the knowledge, skills, and abilities necessary to apply the intermediate to advanced skills required to design, develop, and support InfoSphere Warehouse V9.5 applications.

Section 1 – Architecting Warehouse Solutions (15%)
Demonstrate knowledge of InfoSphere Warehouse architecture and components
Editions
Software Components (why/when to use)
Describe the InfoSphere Warehouse building life-cycle
Steps to build and deploy the application(s)

Section 2 – Implementation (Table Ready) (5%)
Describe hardware topologies
Given a scenario, demonstrate how to implement security considerations

Section 3 – Physical Data Modeling (15%)
Given a scenario, demonstrate knowledge of the modeling process and the Design Studio features used
Identify physical design methods
Compare and synchronize
Impact analysis
Components
Enhancing the model
Given a scenario, describe range/data partitioning considerations
When is it appropriate to use
Cost

Section 4 – Cubing Services (CS) (20%)
Demonstrate knowledge of Cubing Services components
Cube server
Design Studio
MQT administration
Given a scenario, describe CS tooling and access methods
Demonstrate knowledge of CS optimization advisor
Identify the steps in creating a CS OLAP cube
Metadata
Creation of cube model and cube
Demonstrate knowledge of CS administration
Deploying cubes to cube server
Deploying cubes across multiple servers
Caching

Section 5 – Data Mining/Unstructured Text Analytics (12%)
Given a scenario, demonstrate knowledge of data mining and unstructured text analytics in InfoSphere Warehouse V9.5
Given a scenario, describe the InfoSphere Intelligent Miner methods and how to use them
The mining process
Modeling
Scoring
Visualization
Demonstrate how to use Design Studio to implement mining methods
Mining unstructured text data – what do you do with it after it is extracted
Describe the unstructured text analytic information extraction process
Using Java regular expressions
Dictionary
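The Java regular-expression extraction step listed above can be illustrated with an equivalent pattern. This sketch uses Python's re module for brevity (the pattern syntax here is the same in java.util.regex); the sample text, pattern, and the idea of extracting "codes" are all hypothetical:

```python
import re

# Hypothetical information-extraction rule: pull ticket codes like "AB-1234"
# out of unstructured text, the way a regex-based extraction step might.
text = "Ticket AB-1234 was escalated; related to CD-5678 per the log."
pattern = re.compile(r"[A-Z]{2}-\d{4}")  # two capitals, a dash, four digits

codes = pattern.findall(text)
print(codes)  # -> ['AB-1234', 'CD-5678']
```

In practice such patterns are paired with dictionaries of known terms, as the section outline suggests, so that extracted strings can be normalized and categorized.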

Section 6 – SQL Warehousing Tool (SQW) (20%)
Demonstrate knowledge of SQW components
Data flows
Control flows
Mining flows
Variables
Versioning
Describe SQW anatomy
Operators
Ports
Connectors
Given a scenario, describe the SQW debugging functions

Section 7 – Run-time Administration and Monitoring of the Warehouse (13%)
Identify the application preparation steps for deployment
Describe the InfoSphere Warehouse components managed by Admin console
Demonstrate knowledge of managing, monitoring, and scheduling processes in Admin console
Given a scenario, demonstrate knowledge of workload management and monitoring
Difference between workload and classes
Controlling types of queries
Performance Expert

IBM Certified Solution Designer – InfoSphere Warehouse V9.5

Job Role Description / Target Audience
This certification exam certifies that the successful candidate has the knowledge, skills, and abilities necessary to apply the intermediate to advanced skills required to design, develop, and support InfoSphere Warehouse V9.5 applications. Applicable roles include Solution Architects, Data Warehouse Developers, and Database Administrators (in a data warehousing environment).

Requirements
This certification requires 1 test.

Test(s) required:
Click on the link(s) below to see test details, test objectives, suggested training and sample tests.

Test C2090-719 – InfoSphere Warehouse V9.5

QUESTION 1
What are two reasons for a combination of database and front-end tool based analytic architectures in a data warehouse implementation? (Choose two.)

A. Less data is moved across the network, making queries run faster.
B. The database can provide consistent analytic calculations and query speed for common queries.
C. The combination of architectures will ensure fast query performance.
D. Multidimensional queries cannot be processed in SQL by the database engine so it must be done using a front-end tool.
E. The front-end tool allows for additional and more complex algorithms specific to applications that use that tool.

Answer: B,E



QUESTION 2
After deploying an application, you might need to update it by making changes to one or more data flows. Deploying changes to an existing application is called delta deployment. How do you package changes using delta deployment?

A. Package only the operator or property that has changed.
B. Package the data flow that has changed.
C. Package the control flow.
D. Package all the items that were originally packaged and use the same profile that was used.

Answer: C



QUESTION 3
You are implementing a DB2 Workload Manager (WLM) schema to limit the number of load utilities that can execute concurrently. Which WLM object would be used to accomplish this?

A. work class with an associated work action and an appropriate threshold
B. workload with an associated service class and an appropriate threshold
C. work class with an associated service class and an appropriate threshold
D. workload with an associated work action and an appropriate threshold

Answer: A
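The WLM threshold in the question caps how many load utilities may run at once. As an analogy only (this is not DB2 configuration), a counting semaphore enforces the same kind of concurrency limit; the names and the limit value below are hypothetical:

```python
import threading

MAX_CONCURRENT_LOADS = 2              # analogous to a WLM threshold value
load_slots = threading.Semaphore(MAX_CONCURRENT_LOADS)
running = 0                           # loads currently "executing"
peak = 0                              # highest concurrency observed
lock = threading.Lock()

def run_load(name):
    """Simulate one load utility; blocks while all slots are taken."""
    global running, peak
    with load_slots:                  # acquire a slot, or wait for one
        with lock:
            running += 1
            peak = max(peak, running)
        # ... the load utility would do its work here ...
        with lock:
            running -= 1

threads = [threading.Thread(target=run_load, args=(f"load{i}",)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds MAX_CONCURRENT_LOADS
```

The semaphore plays the role of the threshold: the sixth concept in the WLM answer (queueing excess work rather than rejecting it) corresponds to the threads blocking on acquire.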



QUESTION 4
Several operators are defined and linked together in DataFlow1. Another set of operators make up DataFlow2. A control flow is defined and both DataFlow1 and DataFlow2 are used. You require that DataFlow1 dynamically change the variable values used in DataFlow2. How can you fulfill this requirement?

A. The inherent design of the SQL Warehouse Tool is that any variable value changed in one data flow is accessible by any other data flow, as long as the data flows are defined in the same warehouse project.
B. Using the File Export operator, DataFlow1 writes a file that contains updated variable values. DataFlow2 accesses those updated variable values by reading that same file using an Import File operator.
C. When a control flow is executed, a run profile provides the initial values for all variables. Once those values are set in the run profile, they are in effect for the entire execution of the control flow.
D. Using the File Export operator, DataFlow1 writes a file containing updated variable values. A variable assignment operator is then used to assign the values in the file to the appropriate variables. DataFlow2 then has access to the updated variable values.

Answer: D



QUESTION 5
A relational database and a database model that is often a star or snowflake schema are characteristics of which engine storage structure?

A. MOLAP
B. ROLAP
C. Multidimensional cubing
D. Proprietary

Answer: B

Click here to view complete Q&A of C2090-719 exam
Certkingdom Review, Certkingdom C2090-719 PDF

MCTS Training, MCITP Training

Best IBM C2090-719 Certification, IBM C2090-719 Training at certkingdom.com


