Search PPTs

Friday, July 26, 2013


Presentation On TESTING


TESTING Presentation Transcript:

Testing is the process of executing a program with the intent of finding faults. Who should do this testing, and when it should start, are very important questions that are answered in the text.
Software testing is the fourth phase of the Software Development Life Cycle (SDLC), and about 70% of development time is spent on testing. We explore this and many other interesting concepts in this chapter.

Please note that testing starts at the requirements analysis phase and continues through the last maintenance phase.
Static testing: the SRS is tested to check whether it meets the user requirements.
Dynamic testing: starts as soon as the code, or even a single unit (module), is ready.

The concept of software testing has evolved from simple program “check-out” to a broad set of activities that cover the entire software life-cycle.
There are five distinct levels of testing that are given below:
Debug: It is defined as the successful correction of a failure.
Demonstrate: The process of showing that major features work with typical input.
Verify: The process of finding as many faults in the application under test (AUT) as possible.
Validate: The process of finding as many faults as possible in the requirements, the design and the AUT.
Prevent: To avoid errors in the development of requirements, design and implementation by self-checking techniques, including “test before design”.

Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
It is the process of evaluating, reviewing, inspecting and doing desk checks of work products such as requirement specifications, design specifications and code.
It is a human testing activity, as it involves looking at the documents on paper.

Validation is “the process of evaluating a system or component during, or at the end of, the development process to determine whether it satisfies the specified requirements. It involves executing the actual software. It is a computer-based testing process.”

Testing starts right from the very beginning.
This implies that testing is everyone’s responsibility.
It is a Team Effort.
Even Developers are responsible.
They build the code, but they rarely find errors in it because it is their own code.

Consider a while loop whose body has three paths. If the loop is executed twice, we have 3×3 path combinations, and so on. The total number of paths through such code is:

    Total paths = 1 + 3 + (3×3) + (3×3×3) + … = 1 + Σ 3^n

This sum is unbounded, so exhaustive testing would require an infinite number of test cases. Thus, testing is never 100% exhaustive.
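The explosion is easy to see numerically. A minimal sketch (the three-way branch and the loop bounds are illustrative, not from any particular program):

```python
# Count the paths through a loop whose body has `branches` paths and
# which may iterate up to `max_iterations` times: 1 (skip the loop)
# plus branches**k for each iteration count k.

def total_paths(branches: int, max_iterations: int) -> int:
    return 1 + sum(branches ** k for k in range(1, max_iterations + 1))

if __name__ == "__main__":
    for n in (1, 2, 5, 20):
        print(f"{n:2d} iterations -> {total_paths(3, n):,} paths")
```

Even a modest bound of 20 iterations already gives over five billion distinct paths, which is why exhaustive path testing is impractical.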

According to Brian Marick, “A test idea is a brief statement of something that should be tested.”
Cem Kaner said, “The best test cases are the ones that find bugs.”
A test case is a question that you ask of the program. The point of running the test is to gain information, such as whether the program will pass or fail the test.

The pessimistic approach is to stop testing whenever any of the allocated resources (time, budget or test cases) is exhausted.
The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit from continuing testing cannot justify the testing cost.


Presentation On TESTING LEVELS

TESTING LEVELS Presentation Transcript:

2.Black-box testing focuses on software’s external attributes and behavior. Such testing looks at an application’s expected behavior from the user’s point of view.
White-box testing (glass-box testing), in contrast, tests software with knowledge of its internal data structures, physical logic, flow and architecture at the source-code level.
White-box testing looks at testing from the developer’s point of view.
Both black-box and white-box testing are critically important complements of a complete testing effort.
Individually, they do not allow for balanced testing. Black-box testing can be less effective at uncovering certain error types, such as data-flow errors or boundary-condition errors, at the source level. White-box testing does not readily highlight macro-level quality risks such as operating-environment compatibility, time-related errors and usability.

3.When we talk of levels of testing then we are actually talking of three levels of testing:
(a) Unit Testing
(b) Integration Testing, and
(c) System Testing
Generally, system testing is functional rather than structural. We shall now study these testing techniques one by one.

4.Unit testing “is the process of taking a module and running it in isolation from the rest of the software product, by using prepared test cases, and comparing the actual results with the results predicted by the specification and the design of the module.”

5.A system is composed of multiple components or modules that comprise hardware and software. Integration is defined as the set of interactions among components. Testing the interaction between the modules and interaction with other systems externally is called as integration testing.
It is both a type of testing and a phase of testing. The architecture and design give the details of the interactions within the system; however, the interactions between modules, and between the system and external systems, involve many modules at once. The ensuing phase that tests these interactions is called the integration testing phase.

6.Classification of Integration Testing

7.The goal of decomposition-based integration is to test the interfaces among separately tested units.
Types of decomposition-based techniques:
Top-down integration approach
Bottom-up integration approach
Sandwich integration approach
Big-bang strategy
Pros and cons of decomposition-based techniques
Guidelines for choosing an integration method, and conclusions

8.One drawback of decomposition-based integration is that its basis is the functional decomposition tree. If we use a call-graph-based technique instead, we remove this problem.
We also move in the direction of structural testing: because the call graph is a directed graph, we can use it as a program graph as well.

9.Pairwise Integration
The main idea behind pairwise integration is to eliminate the stub/driver development effort. The end result is that we have one integration test session for each edge in the call graph.
Neighborhood Integration
The neighbourhood of a node in a graph is the set of nodes that are one edge away from the given node. 
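The neighbourhood computation can be sketched on a call graph stored as an adjacency dict mapping each caller to its callees; the function names here are made up for illustration:

```python
# A node's neighbourhood is every node one edge away in either
# direction: its callees plus its callers.

def neighborhood(call_graph: dict, node: str) -> set:
    callees = set(call_graph.get(node, ()))
    callers = {caller for caller, called in call_graph.items()
               if node in called}
    return callees | callers

call_graph = {
    "main":   ["parse", "report"],
    "parse":  ["tokenize"],
    "report": ["format_row"],
}

if __name__ == "__main__":
    print(neighborhood(call_graph, "parse"))
```

A neighbourhood-based session then integrates a node together with everything in its neighbourhood, rather than one edge at a time as in pairwise integration.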

10.Pros and cons
Call-graph-based integration techniques move away from a purely structural basis toward a behavioral one, and they match well with developments characterized by builds.

PPT On Text

Presentation On Text

Text Presentation Transcript: 

2.Words and symbols in any form, spoken or written, are the most common system of communication
Multimedia Authors weave words, symbols, sounds & images and then blend text into the mix to create integrated tools & interfaces for acquiring, displaying messages & data.
“GO BACK” is more powerful than “Previous”, and “TERRIFIC” is better than “That answer was correct”.

TYPEFACE: a family of characters that usually includes many type sizes and styles.
FONT: a collection of characters of a single size and style belonging to a particular typeface family.
Type sizes are usually expressed in points; one point is 1/72 inch (about 0.0138 inch).
Font size is the distance from the top of the capital letters to the bottom of the descenders in letters such as g and y.
Font size doesn’t exactly describe the height or width of a character.
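Since one point is 1/72 inch, converting a point size to physical units is a one-line calculation; a small sketch:

```python
# One typographic point = 1/72 inch (approximately 0.0138 inch).
POINTS_PER_INCH = 72

def points_to_inches(points: float) -> float:
    return points / POINTS_PER_INCH

if __name__ == "__main__":
    print(f"12-pt type spans about {points_to_inches(12):.4f} inch "
          "from cap top to descender bottom")
```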

4.Character metrics are the general measurements applied to individual characters.
Kerning: the spacing between particular character pairs.
Tracking: adjusting the spacing between characters across a range of text.
The body width of each character can be regular, condensed or expanded.

Case sensitive
Case insensitive
Intercap: Placing an uppercase letter in the middle of a word

Simplest way to categorize a typeface
A serif is the little decoration at the end of a letter stroke, e.g. in Times.
Sans means “without”; Verdana and Arial are sans-serif typefaces.
Serif fonts are used for body text to guide the reader’s eye along the line of text.
Sans serif fonts are used for headlines and bold statements

7.Designing with Text
Your choice of font size and the number of headlines you place on a particular screen must be related to the complexity of your message.
Interactive website: large amounts of text.
Presentation: only the relevant matter.

8.Choosing Text Fonts
Use as few different faces as possible
Use italic and bold styles where required
Use proper line spacing
Vary the text size according to the importance of the message
Avoid ransom-note typography: using too many fonts on the same page
Use proper kerning

9.Use the proper effects of different colors and different backgrounds.
Anti-aliasing: blends the colors along the edges of the letters (dithering) to create a soft transition between the letters and the background.
Using Drop Shadows
Use meaningful words for links and menu items
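The anti-aliasing point above blends letter and background colors along glyph edges. A minimal sketch of that weighted mix (the per-pixel coverage fraction is an illustrative stand-in for what a real rasterizer computes):

```python
# Blend a foreground (letter) color into a background color.
# Colors are (R, G, B) tuples in 0-255; coverage is the fraction of
# the pixel covered by the glyph, from 0.0 (background) to 1.0 (letter).

def blend(fg, bg, coverage):
    return tuple(round(f * coverage + b * (1 - coverage))
                 for f, b in zip(fg, bg))

if __name__ == "__main__":
    black, white = (0, 0, 0), (255, 255, 255)
    print(blend(black, white, 0.5))   # a half-covered edge pixel: mid-grey
```

A half-covered edge pixel of black text on white comes out mid-grey, which is exactly the soft transition described above.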

10.Menus for navigation
Buttons for interaction
Fields for reading
HTML documents
Animating Text
Symbols and icons
Font editing & Designing tools (Fontographer) 

PPT On Touchscreen

Presentation On Touchscreen

Touchscreen Presentation Transcript: 

2.What is a Touchscreen?
A touchscreen is an electronic visual display that can detect the presence and location of a touch within the display area.
The term generally refers to touching the display of the device with a finger or hand. Touchscreens can also sense other passive objects, such as a stylus.
 Touchscreens are common in devices such as game consoles, all-in-one computers, tablet computers, and smartphones.

3.How Does a Touchscreen Work?
A basic touchscreen has three main components:
Touch sensor;
Controller;
Software driver.

      The touchscreen is an input device, so it needs to be combined with a display and a PC or other device to make a complete touch input system.

4.Touch Sensor
A touch screen sensor is a clear glass panel with a touch responsive surface. The touch sensor/panel is placed over a display screen so that the responsive area of the panel covers the viewable area of the video screen. There are several different touch sensor technologies on the market today, each using a different method to detect touch input. The sensor generally has an electrical current or signal going through it and touching the screen can cause a voltage or signal change. This change is used to determine the location of the touch to the screen.

5.Controller
The controller connects the touch sensor to the PC. It takes information from the touch sensor and translates it into information that the PC can understand. The controller determines what type of interface/connection you will need on the PC: controllers are available that can connect to a serial/COM port or to a USB port. Specialized controllers are also available that work with DVD players and other devices.

6.Software Driver
The driver allows the touchscreen and computer to work together. It tells the computer's operating system how to interpret the touch-event information that is sent from the controller. Most touchscreen drivers today are mouse-emulation drivers, which make touching the screen the same as clicking the mouse at that location. This allows the touchscreen to work with existing software, and allows new applications to be developed without the need for touchscreen-specific programming.

7.Touchscreen Technology
Resistive touchscreen
Capacitive touchscreen
Infrared  touchscreen
Surface acoustic wave (SAW) touchscreen
Strain gauge touchscreen
Optical imaging touchscreen
Dispersive signal technology touchscreen

8.Resistive Touchscreen
A resistive touchscreen panel comprises several layers, the most important of which are two thin, transparent electrically-resistive layers (made with ITO) separated by a thin space. These layers face each other, with a thin gap between. The top screen (the screen that is touched) has a coating on the underside surface of the screen. Just beneath it is a similar resistive layer on top of its substrate. One layer has conductive connections along its sides, the other along top and bottom.

9.Resistive Touchscreen
Working: A voltage is applied to one layer, and sensed by the other. When an object, such as a fingertip or stylus tip, presses down on the outer surface, the two layers touch to become connected at that point: The panel then behaves as a pair of voltage dividers, one axis at a time. By rapidly switching between each layer, the position of a pressure on the screen can be read. 
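The voltage-divider read-out amounts to mapping the sensed voltage, as a fraction of the driving voltage, onto coordinates along one axis. A minimal sketch (the 3.3 V reference and 800-pixel width are assumed values, not taken from any particular controller):

```python
# Map a sensed voltage on one axis of a resistive panel to a pixel
# coordinate: the touch point divides the driven layer into a
# voltage divider, so position is proportional to v_sensed / v_ref.

def touch_position(v_sensed: float, v_ref: float, axis_pixels: int) -> int:
    fraction = v_sensed / v_ref
    return round(fraction * (axis_pixels - 1))

if __name__ == "__main__":
    print(touch_position(1.65, 3.3, 800))   # mid-voltage -> mid-screen
```

Reading both axes means applying the voltage to one layer at a time, as the slide describes, and repeating this calculation per axis.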

10.Resistive Touchscreen
Advantages: High resistance to liquids and contaminants, Low cost.
Disadvantages: Risk of damage by sharp objects, poorer contrast due to having additional reflections from the extra layer of material placed over the screen. 


Presentation On VITAL LINK

VITAL LINK Presentation Transcript: 
1. VITAL LINK: A Requirement Management Tool

2. What is Requirement Management?
Requirements management is the process of ensuring that people know which requirements they have and which they do not.
It is a systematic approach to eliciting, organizing, documenting and managing the changing requirements of a software project.

3.Requirement Management Tool
A requirement management tool is one that facilitates requirement management.
Identification of "individual" requirements.
Assignment to a destination and sorting of requirements.
Requirement group (collection) revision identification.
Providing a basic data interface.

4.What is Vital Link ??
Vital Link is a Requirement management tool from Compliance Automation.
Built on Adobe FrameMaker.
Integration of word processor and database.

5.How it Works??
Vital Link utilizes a relational database which enables users to import existing documents from a variety of different word processors, and automatically parse the document which then can be ready for the user to edit, link entities, add attributes or generate reports.

6.How it Works??

7.Why Is It Used?
Information stored in Vital Link can be filtered and used to create reports and generate metrics.
It is possible to create tables, graphics and complex mathematical formulae.

It works in a multiuser environment and supports both requirements documentation and requirements management.
It supports multiple projects at one time.

Traceability is provided using links.
It provides a reporting facility.
Since it is built on Adobe FrameMaker, all types of documents can be created.




2.During code verification we check that:
The code works according to the functional requirements.
The code has been written in accordance with the design developed earlier in the project life cycle.
No code for any functionality has been missed out.
The code handles errors properly.

3.In dynamic testing, we test a running program, so binaries and executables are now required. We try to test the internal logic of the program. It entails running the actual product against pre-designed test cases to exercise as much of the code as possible.

4.At the initial stages, the developer or tester can perform certain tests based on the input variables and the corresponding expected output variables. This can be a quick test. If we repeat these tests for multiple values of input variables also then the confidence level of the developer to go to the next level increases.
For complex modules, the tester can insert print statements in between to check whether the program control passes through all statements and loops. It is important to remove the intermediate print statements after the defects are fixed.
Another method is to run the product under a debugger or an integrated development environment (IDE). These tools allow single-stepping through instructions and setting breakpoints at any function or instruction.

5.Code coverage testing involves designing and executing test cases and finding out the percentage of code that is covered by testing.
The percentage of code covered by a test is found by a technique called instrumentation of code.
These tools rebuild the code, link the product with a set of libraries provided by the tool, and monitor the portions of code covered, reporting on the portions that are exercised frequently so that the critical or most-often-executed portions of code can be identified.

6.Statement coverage refers to writing test cases that execute each of the program statements. We assume that the more the code is covered, the better the testing of the functionality.
If there are asynchronous exceptions in the code, like divide by zero, then even if we start a test case at the beginning of a section, the test case may not cover all the statements in that section. Thus, even in case of sequential statements, coverage for all statements may not be achieved.
A section of code may be entered from multiple points.
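A toy illustration of statement coverage: real tools instrument the product code itself, but this sketch uses Python's sys.settrace to record which lines of a function execute under a given input (the absolute function is a made-up example):

```python
import sys

def trace_lines(func, *args):
    """Record which lines of `func` (as offsets from its def line) execute."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def absolute(x):   # the function under test
    if x < 0:      # offset 1
        x = -x     # offset 2: reached only for negative input
    return x       # offset 3

if __name__ == "__main__":
    print(sorted(trace_lines(absolute, 5)))    # negation line never runs
    print(sorted(trace_lines(absolute, -5)))   # every statement covered
```

A test suite that only ever passes positive inputs would report incomplete statement coverage here, flagging the untested negation branch.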

7.In the path coverage technique, we split a program into a number of distinct paths. A program, or a part of a program, can start from the beginning and take any of the paths to its completion. The path coverage of a program may be calculated with the following formula:

    Path coverage = (number of paths exercised by the tests / total number of paths) × 100%

8.Functions (like functions in C) are easier to identify in a program and hence it is easier to write test cases to provide function coverage.
Since functions are at higher level of abstraction than code, it is easier to achieve 100% function coverage.
It is easier to prioritize the functions for testing.
Function coverage provides a way of testing traceability, that is tracing requirements through design, coding and testing phases.
Function coverage provides a natural transition to black box testing.

9.Which of the paths are independent? If two paths are not independent, then we may be able to minimize the number of tests.
Is there any limit on the number of tests that must be run to ensure that all the statements have been executed at least once?

10.McCabe’s cyclomatic metric, V(G) of a graph G with n vertices and e edges is given by the formula :
V(G) = e-n+2
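A quick sketch of the metric in code; the if/else graph used in the example is illustrative:

```python
# McCabe's cyclomatic metric for a connected control-flow graph
# with e edges and n nodes: V(G) = e - n + 2.

def cyclomatic_complexity(num_edges: int, num_nodes: int) -> int:
    return num_edges - num_nodes + 2

if __name__ == "__main__":
    # A single if/else: nodes {entry, then, else, exit} and edges
    # entry->then, entry->else, then->exit, else->exit.
    print(cyclomatic_complexity(4, 4))   # V(G) = 2
```

V(G) equals the number of linearly independent paths through the graph, which answers the earlier question of how many such paths must be tested.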


Presentation On XTie-RT


XTie-RT Presentation Transcript:

XTie-RT is a unique requirements management and analysis tool.
XTie-RT is the result of over 10 years of internal development and use within Teledyne Brown Engineering (TBE).
It features very fast access to requirements data and strong support for the requirements analysis process.
The XTie-RT tool ensures product quality and integrity by enforcing requirements traceability throughout the system development life cycle.

It manages critical programs in the areas of:
Proposal management
Requirements capture
Requirements organization
Requirements traceability
Quality assurance
Risk analysis
Requirements validation

4.XTIE Package
Xtie package consists of
Xtie Database
Basic Application
Client Application
The PC version of the server product can support up to and including 64 simultaneous users.
The Sun (UNIX) version of the server product can support up to and including 128 simultaneous users, but does not include a Client Application.

5.Xtie Database
The XTie engine (XTie) is a hierarchical, special-purpose database server that is specifically designed to support the analysis and recovery of related textual data in separate databases.
The XTie database engine has an “open” interface built on RTML™ (Requirements Text Markup Language)
Users don’t need to define database structures.

XTie allows the user to categorize and catalog information in three different ways (parent/child relationships, functional or object hierarchy, and attributes). These mechanisms can be used collectively or separately. Once defined, XTie can recover information based on this cataloged information almost instantaneously.

It automates the mundane tasks associated with requirements analysis and management.
It is easy to learn. 
Its application can be quickly integrated into any management structure.
This tool increases productivity and efficiency.

    Product packaging provides a unique set of benefits.
Users can use Client applications to set up a database network that ranges in size from a single user to as many users as desired
This packaging approach provides a low-cost capability that can be made available to everyone on a project.
Another advantage is that, because each product is packaged for a specific application (e.g., requirements tracing, compliance, etc.), the user can literally install the software and begin using it.

9.Unlike most database applications, the user does not have to define and build a specific database file configuration, and does not have to be a database expert in order to obtain the full benefit of the product.

PPT ON IMAGE File Formats

Presentation On IMAGE File Formats

IMAGE File Formats Presentation Transcript: 

GIF (Graphics Interchange Format):
Commonly used to display graphics & images in HTML documents over the web & other online services and applications
Limited to 8-bit (256-color) images
Efficiently compresses solid areas of color while preserving sharp details such as line art, logos
Supported by most of the browsers
Comes in two formats, the original is GIF87a & other is GIF89a that supports animation

3.JPEG (Joint Photographic Experts Group):
Used to display graphics & images in HTML documents over the web & other online services and applications
Supports the CMYK, RGB and grayscale color models
Uses a 24-bit format, so it retains all the color information in an RGB image
Uses powerful but lossy compression methods that produce files as much as ten times smaller than GIF
JPEG compression is slow

4.PNG (Portable Network Graphics)
Used for lossless compression
Supports 48 bit of color information and produces background transparency without sharp edges
Some browsers don’t support PNG images
PNG preserve transparency in grayscale and RGB
Supports RGB, grayscale, bitmap mode

5.TIFF (Tagged Image File Format)
Was designed to be a universal bitmapped image format
Used to exchange files between applications and computer platform
Supports pixel resolution of 48 bits
Supports different color models, RGB, CMYK, Grayscale images

6.Computer Color Model
Color Model is an abstract mathematical model describing the way colors can be represented
Colors are represented using a combination of red, green and blue.
In the additive RGB model, for example, Red + Green = Yellow.
The various color models are RGB, CMYK, HSB, CIE and YUV.

7.RGB Color Model
It is additive method because color is created by adding the light sources in three primary colors
Name of the model comes from the initials of the primary colors
This model is used for TV and Computer Monitors
RGB is a device dependent color space

8.CMYK Color Model
Subtractive Method
Color is created by combining color media such as paints or ink that subtracts some parts of the color spectrum of the light and reflect the other back to eye
Used to create color in printing
The printed page is made up of tiny halftone dots of the four process colors: C (cyan), M (magenta), Y (yellow) and K (black).

9.HSB Color Model
HSB Stands for Hue, Saturation, Brightness
According to this model, any color is represented by three numbers
The first number is the hue; its value ranges from 0 to 360 degrees, and each degree represents a different color.
Red is at 0 (or 360) degrees; the other colors follow around the wheel, e.g. yellow at 60, green at 120, cyan at 180 and blue at 240 degrees, continuing up to violet.
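The HSB model (called HSV in Python's standard library) is easy to experiment with via colorsys; hue in degrees must be scaled to the 0–1 range colorsys expects:

```python
import colorsys

def hsb_to_rgb(hue_degrees: float, saturation: float, brightness: float):
    """Convert an HSB triple to 8-bit RGB values."""
    r, g, b = colorsys.hsv_to_rgb(hue_degrees / 360.0, saturation, brightness)
    return tuple(round(c * 255) for c in (r, g, b))

if __name__ == "__main__":
    print(hsb_to_rgb(0, 1, 1))     # red
    print(hsb_to_rgb(120, 1, 1))   # green
    print(hsb_to_rgb(240, 1, 1))   # blue
```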

10.Comparison of various Color Models

PPT On Introduction To ANDROID

Presentation On Introduction To ANDROID

Introduction To ANDROID Presentation Transcript: 

Android is a Linux-based operating system designed primarily for touchscreen mobile devices such as smartphones and tablet computers.

Initially developed by Android, Inc., which Google backed financially and later bought in 2005, Android was unveiled in 2007 along with the founding of the Open Handset Alliance: a consortium of hardware, software, and telecommunication companies devoted to advancing open standards for mobile devices. The first Android-powered phone was sold in October 2008.


4.Android consists of a kernel based on Linux kernel version 2.6 and, from Android 4.0 Ice Cream Sandwich onwards, version 3.x, with middleware, libraries and APIs written in C, and application software running on an application framework which includes Java-compatible libraries based on Apache Harmony.
Android uses the Dalvik virtual machine with just-in-time compilation to run Dalvik 'dex-code' (Dalvik Executable), which is usually translated from Java bytecode. The main hardware platform for Android is the ARM architecture.

Android has a growing selection of third party applications, which can be acquired by users either through an app store such as Google Play or the Amazon Appstore, or by downloading and installing the application's APK file from a third-party site.

The Play Store application allows users to browse, download and update apps published by Google and third-party developers, and is pre-installed on devices that comply with Google's compatibility requirements.

The app filters the list of available applications to those that are compatible with the user's device, and developers may restrict their applications to particular carriers or countries for business reasons. Purchases of unwanted applications can be refunded within 15 minutes of the time of download, and some carriers offer direct carrier billing for Google Play application purchases, where the cost of the application is added to the user's monthly bill.

As of September 2012, there were more than 675,000 apps available for Android, and the estimated number of applications downloaded from the Play Store was 25 billion.

Android applications run in a sandbox, an isolated area of the system that does not have access to the rest of the system's resources, unless access permissions are explicitly granted by the user when the application is installed. Before installing an application, the Play Store displays all required permissions: a game may need to enable vibration or save data to an SD card, for example, but should not need to read SMS messages or access the phonebook. After reviewing these permissions, the user can choose to accept or refuse them, installing the application only if they accept.

Android was built from the ground-up to enable developers to create compelling mobile applications that take full advantage of all a handset has to offer. It was built to be truly open. For example, an application can call upon any of the phone’s core functionality such as making calls, sending text messages, or using the camera, allowing developers to create richer and more cohesive experiences for users. Android is built on the open Linux Kernel.

Android breaks down the barriers to building new and innovative applications. For example, a developer can combine information from the web with data on an individual’s mobile phone — such as the user’s contacts, calendar, or geographic location — to provide a more relevant user experience. With Android, a developer can build an application that enables users to view the location of their friends and be alerted when they are in the vicinity giving them a chance to connect.

10.Android provides access to a wide range of useful libraries and tools that can be used to build rich applications. For example, Android enables developers to obtain the location of the device, and allows devices to communicate with one another enabling rich peer–to–peer social applications.

PPT On Animation

Presentation On Animation

Animation Presentation Transcript: 

2.Animation is the rapid display of a sequence of images of 2-D artwork or model positions in order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in a number of ways. The most common method of presenting animation is as a motion picture or video program, although several other forms of presenting animation also exist.

3.Animation Techniques
When you create an animation, organize its execution into a series of logical steps. First, gather up in your mind all the activities you wish to provide in the animation; if it is complicated, you may wish to create a written script with a list of activities and required objects. Choose the animation tool best suited for the job. Then build and tweak your sequences; experiment with lighting effects. Allow plenty of time for this phase when you are experimenting and testing. Finally, post-process your animation, doing any special rendering and adding sound effects.

4.Cel Animation
The term cel derives from the clear celluloid sheets that were used for drawing each frame, which have been replaced today by acetate or plastic. Cels of famous animated cartoons have become sought-after, suitable-for-framing collector’s items.
Cel animation artwork begins with key frames (the first and last frame of an action). For example, when an animated figure of a man walks across the screen, he balances the weight of his entire body on one foot and then the other in a series of falls and recoveries, with the opposite foot and leg catching up to support the body.

5.Computer Animation
Computer animation programs typically employ the same logic and procedural concepts as cel animation, using layer, keyframe, and tweening techniques, and even borrowing from the vocabulary of classic animators.
On the computer, paint is most often filled or drawn with tools using features such as gradients and anti-aliasing. The word inks, in computer animation terminology, usually refers to special methods for computing RGB pixel values, providing edge detection, and layering so that images can blend or otherwise mix their colors to produce special transparencies, inversions and effects.
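The tweening technique mentioned above can be sketched as linear interpolation between two keyframe positions; the coordinates and frame count are illustrative:

```python
# Generate the in-between frames from a start keyframe to an end
# keyframe by linear interpolation of (x, y) positions.

def tween(start, end, frames):
    (x0, y0), (x1, y1) = start, end
    step = frames - 1
    return [(x0 + (x1 - x0) * t / step,
             y0 + (y1 - y0) * t / step)
            for t in range(frames)]

if __name__ == "__main__":
    for position in tween((0, 0), (100, 50), 5):
        print(position)
```

The animator supplies only the two keyframes; the computer fills in the three in-between frames, which is exactly the division of labor between key animator and inbetweener in cel animation.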

6.Kinematics
It is the study of the movement and motion of structures that have joints, such as a walking man.
Inverse kinematics, found in high-end 3D programs, is the process by which you link objects such as hands to arms and define their relationships and limits.
Once those relationships are set, you can drag these parts around and let the computer calculate the result.

7.Morphing
Morphing is a popular effect in which one image transforms into another. Morphing applications and other modelling tools that offer this effect can perform the transition not only between still images but often between moving images as well.

8.Animation File Formats
Some file formats are designed specifically to contain animations, and they can be ported among applications and platforms with the proper translators.
Director: *.dir, *.dcr
AnimationPro: *.fli, *.flc
3D Studio Max: *.max
SuperCard and Director: *.pics
CompuServe: *.gif
Flash: *.fla, *.swf


Presentation On ASTRA SITE TEST

ASTRA SITE TEST Presentation Transcript:

Mercury Interactive's Astra SiteTest is a 'stress-testing' application for Web sites that lets administrators determine how a Web site will perform under a heavy load.
Astra LoadTest enables you to test standard Web objects and ActiveX controls.
Webmasters can generate millions of actual hits against a Web server, with point and click ease, to help ensure performance and reliability of Internet/intranet applications.
Web sites must be able to support millions of anticipated hits every day and deliver the required performance; otherwise poor performance may discourage users from returning to the site, and site failure risks critical sales opportunities. Astra SiteTest helps Webmasters solve the problem of poor web site performance and failure.

3.Astra Site Test Continued…
Example: the IBM Olympic site and the CNN Interactive Election Day site had to almost completely terminate their services due to peak loads. These are prime examples where the Web server could not support the increased user load.
Thus, Astra SiteTest is used to let Web masters stress test their Web site easily, with minimal testing resources.

PLATFORM: The product runs on Windows 95 and Windows NT and works with any combination of Web-server peripherals and software.

Web User Generator component: It records each URL in a test script.
Controller Component: It begins a separate process or task for each simulated user and records the result and response time.
Analysis component: Users can view test results.
Virtual User Recorder: It records each step we perform and generates a test that graphically displays this step in an icon-based test tree.
Astra LoadTest Controller: It is used to run load tests and analyze the Web application’s performance under load.

5.Virtual User Recorder

6.Astra Load Test Controller
The Virtual User Recorder enables us to customize our test to accurately measure the performance of our Web application under load.
Some performance measuring factors can be:
Rendezvous Points
Run-time options

To measure the performance of the server, transactions are defined. A transaction represents a step or a set of steps that we want to measure.
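The idea of a transaction can be illustrated with a small timing sketch. This is not Astra's own API, just a Python analogue of measuring a named step or group of steps:

```python
import time
from contextlib import contextmanager

# Illustrative analogue of a "transaction": time a named step.
# (Astra LoadTest defines transactions in its own recorder; this
# sketch only demonstrates the concept.)
timings = {}

@contextmanager
def transaction(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

with transaction("login"):
    time.sleep(0.01)   # stand-in for the real step being measured

print(f"login took {timings['login']:.3f}s")
```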

8.Rendezvous Points: During the scenario run, we instruct multiple Vusers to perform tasks
simultaneously by creating a rendezvous point. This ensures that:
intense user load is emulated
transactions are measured under the load of multiple Vusers.
A rendezvous point is a meeting place for Vusers. When the rendezvous statement is interpreted, the Vuser is held by the Controller until all the members of the rendezvous arrive. When all the Vusers have arrived (or a time limit is reached), they are released together and perform the next task in their Vuser scripts.
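The same hold-then-release behaviour can be sketched with Python's threading.Barrier, which plays the role of the rendezvous point in this illustration (the Vuser count and arrival delays are made up):

```python
import threading, time

# Rendezvous sketch: each "Vuser" is held at the barrier until all
# members arrive, then all are released together to perform the
# next task simultaneously.
N_VUSERS = 3
rendezvous = threading.Barrier(N_VUSERS)
release_times = []

def vuser(arrival_delay):
    time.sleep(arrival_delay)    # each Vuser arrives at a different moment
    rendezvous.wait()            # held here until every Vuser arrives
    release_times.append(time.perf_counter())

threads = [threading.Thread(target=vuser, args=(d,)) for d in (0.0, 0.02, 0.04)]
for t in threads: t.start()
for t in threads: t.join()

# The release times are nearly identical: the Vusers left together.
spread = max(release_times) - min(release_times)
```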

9.Setting Run-Time Options:
Astra LoadTest run-time options affect how your test runs in a load testing scenario. The run-time settings are only used when load testing in the Controller.
Run-time Settings contain the following tabbed pages:

10.LOG: The Log tab options indicate what type of output messages Astra LoadTest should send to the output file.



AUTOMATED TESTING Presentation Transcript: 

2.Automated testing is automating the manual testing process. It is used to replace or supplement manual testing with a suite of testing tools.

Automated testing tools are used to document tests, produce test guides based on data queries, provide temporary structures to help run tests and measure the results of tests.

3.Consideration During Automated Testing
While performing testing with automated tools, the following points should be noted:
Clear and reasonable expectations should be established in order to know what can and what cannot be accomplished with automated testing in the organization.
There should be a clear understanding of the requirements that must be met in order to achieve successful automated testing. This requires that technical personnel use the tools effectively.
The organization should have detailed, reusable test cases which contain exact expected results, and a stand-alone test environment with a restorable database.
Testing tool should be cost effective. The tool must ensure that test cases developed for manual testing are also useful for automated testing.

Testing is of two types:
Static testing
Dynamic testing
The tools used during testing are named accordingly:
Static testing tools.
Dynamic testing tools.

5.Static testing
Dynamic testing

The main problems with manual testing are listed below:
Not reliable: manual testing is not reliable as there is no yardstick available to find out whether the actual and expected results have been compared. We just rely on the tester’s word.
High risk: a manual testing process is subject to high risks of oversights and mistakes.
Incomplete coverage: testing is quite complex when we have a mix of multiple platforms, operating systems, servers, clients, channels, business processes, etc.
Time consuming: limited test resources make manual testing simply too time consuming. As per one study, 90% of all IT projects are delivered late due to manual testing.
Fact and fiction: the fiction is that manual testing is done, while the fact is that only some manual testing is done, depending upon feasibility.

Automated testing is the process of automating the manual testing process. It is used to replace or supplement manual testing with a suite of testing tools. Automated testing tools assist software testers to evaluate the quality of the software by automating the mechanical aspects of the software testing task. The benefits of automation include increased software quality, improved time to market, repeatable test procedure and reduced testing costs.

Despite many benefits, the pace of test automation is slow. Some of its disadvantages are given below:
An average automated test suite development effort is normally 3-5 times the cost of a complete manual test cycle.
Automation is too cumbersome. Who would automate? Who would train? Who would maintain? This complicates the matter.
In many organizations, test automation is not even a discussion issue.
There are some organizations where there is practically no awareness, or only some awareness, of test automation.
Automation is not an item of high priority for management. It does not make much difference to many organizations.
Automation requires additional trained staff, and often there is no staff for the purpose.

The skills required depend on what generation of automation the company is in:
Capture/playback and test harness tools (first generation)
Data driven tools (second generation)
Action driven (third generation)
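The "data driven" (second generation) idea can be sketched briefly: the test logic is written once and the cases live in a data table, so adding coverage means adding rows rather than code. The function under test here is invented for illustration:

```python
# Data-driven testing sketch: one test routine, many data rows.
def discount(order_total):
    # hypothetical code under test: 10% off orders of 100 or more
    return 0.10 if order_total >= 100 else 0.0

test_table = [
    # (input, expected)
    (50,   0.0),
    (99,   0.0),
    (100,  0.10),
    (250,  0.10),
]

# Collect any rows where the code under test disagrees with the table.
failures = [(x, got, want) for x, want in test_table
            if (got := discount(x)) != want]
```

Third-generation "action driven" tools extend this so the data table also names the action to perform, not just the inputs.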

Test automation is a partial solution, not a complete solution. One does not go in for automation because it is easy. It is a painful and resource-consuming exercise, but once it is done, it has numerous benefits. For example, developing software to automate inventory management may be a time-consuming, painful and costly resource-intensive exercise, but once done, inventory management becomes relatively a breeze.




The term black box refers to the software which is treated as a black box.
The system or source code is not checked at all. It is done from customer’s viewpoint.
The test engineer engaged in black box testing only knows the set of inputs and expected outputs and is unaware of how those inputs are transformed into outputs by the software.

3.Boundary Value Analysis (BVA)
It is a black box testing technique that believes in, and extends, the concept that defect density is higher towards the boundaries. This happens for the following reasons:
Programmers usually are not able to decide whether they have to use the <= operator or the < operator when trying to make comparisons.
Different terminating conditions of for-loops, while-loops and repeat-loops may cause defects to move around the boundary conditions.
The requirements themselves may not be clearly understood, especially around the boundaries, thus causing even a correctly coded program to not perform the correct way.

4.What is BVA?
The basic idea of BVA is to use input variable values at their minimum, just above the minimum, a nominal value, just below their maximum and at their maximum.
BVA is based upon a critical assumption that is known as single fault assumption theory.
We derive the test cases on the basis of the fact that failures are rarely due to the simultaneous occurrence of two faults. So, we derive test cases by holding the values of all but one variable at their nominal values and letting that one variable assume its extreme values.

5.Limitations of BVA
Boolean and logical variables present a problem for Boundary value analysis.
BVA assumes the variables to be truly independent which is not always possible.
BVA test cases have been found to be rudimentary because they are obtained with very little insight and imagination.

6.Before we generate the test cases, we first need to define the problem domain:
Problem Domain: the triangle program accepts three integers, a, b and c as input. These are taken to be the sides of a triangle. The integers a, b and c must satisfy the following conditions: 

7.How to generate BVA Test Cases?
We know that our range is [1,200], where 1 is the lower bound and 200 is the upper bound. Also, we find that this program has three inputs: a, b and c. So, for our case:

BVA yields (4n+1) test cases, so for n = 3 inputs the total number of test cases will be (4 × 3) + 1 = 12 + 1 = 13.
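The count can be checked with a short sketch that generates single-fault BVA cases for the triangle program's range [1,200] (the nominal value is taken as the midpoint, which is an assumption):

```python
# Single-fault BVA: hold all variables at nominal, let one variable
# take min, min+, nominal, max-, max. For n variables this yields
# 4n + 1 distinct cases (the all-nominal case appears once).
def bva_cases(n_vars, lo, hi):
    nominal = (lo + hi) // 2
    values = [lo, lo + 1, nominal, hi - 1, hi]
    cases = {tuple(nominal for _ in range(n_vars))}
    for i in range(n_vars):
        for v in values:
            case = [nominal] * n_vars
            case[i] = v
            cases.add(tuple(case))
    return sorted(cases)

triangle_cases = bva_cases(3, 1, 200)   # sides a, b, c in [1, 200]
print(len(triangle_cases))              # 13
```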

We draw the table now which shows those 13 test-cases.

8.BVA test cases for triangle problem

The use of equivalence classes as the basis for functional testing has two motivations:
We want exhaustive testing and
We want to avoid redundancy.
This is not handled by the BVA technique, as we can see massive redundancy in its tables of test cases.
The idea of equivalence class testing is to identify test cases by using one element from each equivalence class.
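A minimal sketch of the one-element-per-class idea, using an illustrative triangle-type implementation and a single representative input for each output class:

```python
# Equivalence class testing sketch: test one representative per
# output class instead of many redundant boundary combinations.
def triangle_type(a, b, c):
    # illustrative implementation under test
    if not (a + b > c and b + c > a and a + c > b):
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# one representative input per equivalence class, with expected output
representatives = {
    (5, 5, 5):  "equilateral",
    (5, 5, 8):  "isosceles",
    (3, 4, 5):  "scalene",
    (1, 2, 8):  "not a triangle",
}
results = {args: triangle_type(*args) for args in representatives}
```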

Of all the functional testing methods, those based on decision tables are the most rigorous, because decision tables enforce logical rigour.
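A decision table can be represented directly as data, which is what makes the method easy to audit for completeness: every combination of condition outcomes must map to exactly one action. The conditions below are illustrative, not taken from the text:

```python
# Decision table as data: each rule maps a tuple of condition
# outcomes to an action. Enumerating all combinations enforces
# the logical rigour the method is known for.
rules = {
    # (valid_input, in_range): action  -- illustrative conditions
    (True,  True):  "process",
    (True,  False): "range error",
    (False, True):  "format error",
    (False, False): "format error",   # format check takes priority
}

def decide(valid_input, in_range):
    return rules[(valid_input, in_range)]
```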

PPT On Bugzilla

Presentation On Bugzilla


Bugzilla Presentation Transcript:

2.What is a Bug?
A bug / software bug is an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways.

Most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code.

A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy.

3.How bugs get into software?
Bugs are a consequence of the nature of human factors in the programming task.

They arise from oversights or mutual misunderstandings made by a software team during specification, design, coding, data entry and documentation.

For example, one might accidentally type a "<" where a ">" was intended, perhaps resulting in the words being sorted into reverse alphabetical order.

4.How to find bugs?
Finding and fixing bugs, or "debugging", has always been a major part of programming.

And usually, the most difficult part of debugging is finding the bug in the source code.

Code can be added so that messages or values can be written to a console (for example with printf in the C programming language) or to a window or log file to trace program execution or show values.
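The same tracing idea in Python, using the stdlib logging module in place of C's printf (the traced function is invented for illustration):

```python
import logging

# Trace program execution by writing values to the console/log,
# as described above -- here with Python's stdlib logging.
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("trace")

def mean(values):
    log.debug("mean() called with %r", values)        # trace the input
    total = sum(values)
    log.debug("total=%s count=%s", total, len(values))  # trace intermediates
    return total / len(values)

result = mean([2, 4, 6])
```

Unlike scattered print statements, log levels let the trace output be switched off without deleting the debugging code.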


6.What is Bugzilla?
Bugzilla is a Web-based general-purpose bugtracker and testing tool.

Originally developed and used by the Mozilla project, and licensed under the Mozilla Public License.

It is free and open-source, and has many features its expensive counterparts lack.

Used, among others, by Mozilla Foundation, Wikimedia Foundation, WebKit, NASA, Yahoo!, GNOME, KDE, Red Hat and Novell.

7.Features of Bugzilla
Advanced Search Capabilities
New users can use a simple Google-like search for bugs, while more advanced users can filter searches with very specific queries.

Email Notifications
Users can choose to be notified by email about any changes made to any bugs in Bugzilla.

File/Modify Bugs By Email
Users can send Bugzilla an email that will create a new bug, or will modify an existing bug.

Time Tracking
Users can record the time they think they will need to fix a bug, the time spent on a bug, and the deadline to fix the bug.

8.Strong Security
Bugzilla runs under Perl's "taint" mode to prevent SQL Injection, and has a very careful system in place to prevent Cross-Site Scripting.

Everything in Bugzilla is done using templates, from emails to the user interface. These templates are written in HTML, CSS, and JavaScript, so they are easy to edit.

Depending on the browser and language a user connects to Bugzilla with, they will be served in their own language. This is great for global open source projects.

9.How to use Bugzilla?
Create a Bugzilla account
If you want to use Bugzilla, first you need to create an account.
Click the "Open a new Bugzilla account" link, enter your email address and, optionally, your name in the spaces provided, then click "Create Account" .
Within moments, you should receive an email to the address you provided above, which contains your login name (generally the same as the email address), and a password you can use to access your account. This password is randomly generated, and can be changed to something more memorable.
Click the "Log In" link in the yellow area at the bottom of the page in your browser, enter your email address and password into the spaces provided, and click "Login".
You are now logged in. Bugzilla uses cookies for authentication so, unless your IP address changes, you should not have to log in again.

10.Searching for Bugs
The Bugzilla Search page is the interface where you can find any bug report, comment, or patch currently in the Bugzilla system.
The Search page has controls for selecting different possible values for all of the fields in a bug, as described above. Once you've defined a search, you can either run it, or save it as a Remembered Query, which can optionally appear in the footer of your pages.
Highly advanced querying is done using Boolean Charts, which have their own context-sensitive help.

PPT On Caliber Requirement Management Tool

Presentation On Caliber Requirement Management Tool

Caliber Requirement Management Tool Presentation Transcript:
1.Caliber – Requirement Management Tool

What is caliber tool?
Why is caliber tool used?
How is it used ?
A screenshot of the tool

3.What are requirement management tools ?
Requirements Management Tools assist organizations in defining and documenting requirements by allowing them to store requirements in a central location

4.What is caliber ?
It is a requirement management tool which can be customized to support many requirements processes.
It enables software teams to deliver on key project milestones with greater accuracy and predictability

5.What is Caliber ?
Caliber provides a Windows Explorer-like workplace for manipulating the hierarchical requirements tree, with requirement details accessible through a tabbed dialog on the right side of the screen.

  Project teams can then access the requirements to determine what is to be developed, and customers can access the requirements to ensure that their needs were correctly specified. This also aids the process of classifying and prioritizing requirements.

6.Why caliber?
   One central repository:  It provides a central, secure repository for all project requirements.
Requirements Analysis:  Spreadsheet views permit sorting and prioritizing requirements according to  cost and value.
Security: Centralized repository provides security, visibility and availability to all requirements data.

7.Requirements Traceability:
    It lets you link software requirements to a variety of artefacts across the lifecycle

Diverse client set: includes clients for a variety of users, such as Web, Eclipse, Microsoft® Visual Studio® (including Team System) and Windows.

Adaptability: Caliber RM adapts to fit your processes, bringing speed and agility to the software requirements management process

8.Features: Integration with testing tools

End-to-end impact analysis: Multiple methods for visualizing traceability help users to understand immediately the scope of analysis necessary to gauge the impact of a specific change

9.How does caliber tool manage  requirements ?
Change Management
For each change, a unique history record is created.

Differences between the requirements in two versions can be easily spotted.

10.2)Online Publishing:

Team members with a network connection can access requirements data.

PPT On Computer Peripherals

Presentation On Computer Peripherals

Computer Peripherals Presentation Transcript:
1.Computer peripherals

A contemporary computer mouse, with the most common standard features: two buttons and a scroll wheel.

In computing, a mouse (plural mice, mouse devices, or mouses) is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. Physically, a mouse consists of a small case, held under one of the user's hands, with one or more buttons.

It sometimes features other elements, such as "wheels", which allow the user to perform various system-dependent operations, or extra buttons or features can add more control or dimensional input. The mouse's motion typically translates into the motion of a pointer on a display, which allows for fine control of a Graphical User Interface.

3.Mechanical mouse shown with the top cover removed.
Operating a mechanical mouse. 1: moving the mouse turns the ball. 2: X and Y rollers grip the ball and transfer movement. 3: Optical encoding disks include light holes. 4: Infrared LEDs shine through the disks. 5: Sensors gather light pulses to convert to X and Y velocities.

Bill English, builder of Engelbart's original mouse, invented the so-called ball mouse in 1972 while working for Xerox PARC. The ball-mouse replaced the external wheels with a single ball that could rotate in any direction. It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required.

4.Keyboard (computing)

5.In computing, a keyboard is an input device partially modelled after the typewriter keyboard which uses an arrangement of buttons, or keys which act as electronic switches.

A keyboard typically has characters engraved or printed on the keys, and each press of a key typically corresponds to a single written symbol. However, to produce some symbols requires pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers or signs (characters), other keys or simultaneous key presses can produce actions or computer commands.

In normal usage, the keyboard is used to type text or numbers into a word processor, text editor, or other program. In a modern computer the interpretation of keypresses is generally left to the software. A computer keyboard distinguishes each physical key from every other and reports all keypresses to the controlling software. Keyboards are also used for computer gaming, either with regular keyboards or by using special gaming keyboards which can expedite frequently used keystroke combinations.

6.Standard keyboards

Standard keyboards such as the 104-key Windows keyboards include alphabetic characters, punctuation symbols, numbers, and a variety of function keys. The internationally-common 102/105 key keyboards have a smaller 'left shift' key and an additional key with some more symbols between that and the letter to its right (usually Z or Y).

Keyboards with extra keys such as multimedia keyboards have special keys for accessing music, the web, and other oft-used programs, a mute button, volume buttons or knob, and a standby (sleep) button. Gaming keyboards have extra function keys which can be programmed with keystroke macros. For example, ctrl+shift+y could be a keystroke that is frequently used in a certain computer game. Shortcuts marked on color-coded keys are used for some software applications and for specialized uses including word processing, video editing, graphic design, and audio editing.

7.Floppy disk

8.A floppy disk is a data storage medium that is composed of a disk of thin, flexible ("floppy") magnetic storage medium encased in a square or rectangular plastic shell.

Floppy disks are read and written by a floppy disk drive or FDD, the initials of which should not be confused with "fixed disk drive", which is another term for a hard disk drive. Invented by IBM, floppy disks in 8-inch (200 mm), 5¼-inch (133⅓ mm), and the newest and most common 3½-inch (90 mm) formats enjoyed many years as a popular and ubiquitous form of data storage and exchange, from the mid-1970s to the late 1990s. They have now been superseded by flash and optical storage devices.

9.Hard disk drive

10.Hard disk drive
A hard disk drive (HDD), commonly referred to as a hard drive, hard disk, or fixed disk drive, is a non-volatile storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces. Strictly speaking, "drive" refers to a device distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.

Originally, the term "hard" was temporary slang, substituting "hard" for "rigid", before these drives had an established and universally agreed-upon name. An HDD is a rigid-disk drive, although it is rarely referred to as such. By way of comparison, a floppy drive (more formally, a diskette drive) has a disc that is flexible. Some time ago, IBM's internal company term for an HDD was "file".


Presentation On COQUALMO

COQUALMO Presentation Transcript:

COnstructive QUALity Model.
Formerly CODEFMO.
Estimation model.
Predicts number of residual defects/KSLOC or defects/FP.

3.Enables 'what-if' analyses.
Assessment of payoffs for quality investments.
Effects of personnel, project, product and platform characteristics.
Understanding of interactions amongst quality strategies.

4.Relationships between Costs, Schedule and Quality.
Balancing of these three factors.
Refining the estimate.
Tradeoff and Risk Analysis

Defects conceptually creep into the product.
Defect-elimination to improve product quality.
Based on “The Software Defect Introduction and Removal Model”.

6.Defects Introduction
Requirement Defects
Design Defects
Coding Defects

7.Automated Analysis
People Reviews
Execution Testing & Tools 
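A toy calculation in the spirit of the introduction/removal model: defects introduced in each phase are thinned by the three removal profiles above. The defect counts and removal efficiencies here are invented for illustration; the real COQUALMO uses calibrated drivers and multipliers:

```python
# Toy residual-defect arithmetic (illustrative numbers only; the
# real COQUALMO model uses calibrated introduction and removal
# drivers, not these made-up values).
introduced_per_ksloc = {"requirements": 10, "design": 20, "coding": 30}

# assumed removal efficiencies for the three profiles named above
removal = {"automated analysis": 0.10,
           "people reviews": 0.30,
           "execution testing and tools": 0.40}

residual_fraction = 1.0
for eff in removal.values():
    residual_fraction *= (1.0 - eff)   # fraction surviving each activity

total_introduced = sum(introduced_per_ksloc.values())       # 60 per KSLOC
residual_defects = total_introduced * residual_fraction     # 60 * 0.9*0.7*0.6
print(round(residual_defects, 2))   # 22.68
```

The "what-if" analyses the model enables amount to re-running this arithmetic with different introduction rates or removal efficiencies.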

PPT On Critical Path Method

Presentation On Critical Path Method

Critical Path Method Presentation Transcript:
 1.Critical Path Method

What is it?
Why do we need it?
How it Works?
Critical Path
How to Identify Critical Path ?
Alternative method

3.CPM: What is it?
Algorithm for scheduling a set of project activities
Important tool for effective project management
Applied to any approach which is used to analyze a project network logic diagram
Provides a graphical view of the project
Predicts the time required to complete a project
Identifies activities critical to maintaining schedule

4.CPM: Why do we need it?
Allows us to monitor achievement of project goals
Helps us see where remedial action needs to be taken to get a project back on course
Allows us to prioritize activities for the effective management of project completion
Useful tool for scheduling the dependencies and controlling a project
Efficient way of shortening time and scheduling a project in a given timeframe
Allows us to shorten the planned critical path of a project by pruning critical path activities, by fast tracking and/or crashing the critical path

5.CPM: How it works?
Before using CPM, a project model is constructed that includes -
List of all activities required to complete the project
Time duration that each activity will take for completion
The dependencies between the activities
Using these values, CPM calculates –
Critical Path
Earliest and latest times each activity can start and finish without making the project longer

6.CPM determines –
Critical Activities, i.e. cannot be delayed
Activities having float, i.e. can be delayed
Total float: delay without affecting the project completion date
Free float: delay without affecting the subsequent tasks

7.CPM: Critical Path
Sequence of project network activities which add up to the longest overall duration
Represents the shortest time needed to complete a project
 If an activity of this path is delayed, the project will be delayed
If project needs to be accelerated, the times for critical path activities should be reduced

8.CPM: How to identify critical path?
Can be identified by determining the following  parameters for each activity -
Earliest Start: earliest time at which an activity can start given that its preceding activities are completed
Earliest Finish: Earliest Start + Time taken to complete the activity
Latest Finish: latest time at which activity can be finished without delaying the project
Latest Start: Latest Finish – Time taken to complete the activity

9.CPM: How to identify critical path?
Slack Time: Represents the amount of time by which the activity can be delayed
Critical Path: Path consisting of all activities whose slack time is zero
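These parameters can be computed with a forward pass (Earliest Start/Finish) and a backward pass (Latest Start/Finish) over the activity network. The durations and dependencies below are made up for illustration:

```python
# CPM sketch: forward/backward pass over a small activity network.
duration = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]                      # topological order

es, ef = {}, {}
for a in order:                                   # forward pass
    es[a] = max((ef[p] for p in preds[a]), default=0)   # Earliest Start
    ef[a] = es[a] + duration[a]                         # Earliest Finish
project_end = max(ef.values())

succs = {a: [b for b in order if a in preds[b]] for a in order}
lf, ls = {}, {}
for a in reversed(order):                         # backward pass
    lf[a] = min((ls[s] for s in succs[a]), default=project_end)  # Latest Finish
    ls[a] = lf[a] - duration[a]                                  # Latest Start

slack = {a: ls[a] - es[a] for a in order}         # Slack Time
critical_path = [a for a in order if slack[a] == 0]
print(project_end, critical_path)   # 9 ['A', 'C', 'D']
```

Here B has a slack of 2, so it can be delayed without delaying the project, while A, C and D form the critical path.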

10.CPM: Example

PPT On Cyber Crime

Presentation On Cyber Crime

Cyber Crime Presentation Transcript:
1.Cyber Crime

2.What is Cyber?
“Cyber” refers to the imaginary space that is created when electronic devices communicate, like a network of computers.

3.Cyber crime refers to anything done in the cyber space with a criminal intent.
These could be either the criminal activities in the conventional sense or could be activities, newly evolved with the growth of the new medium.
Cyber crime includes acts such as hacking, uploading obscene content on the Internet, sending obscene e-mails and hacking into a person's e-banking account to withdraw money.

4.Computer crime, or cybercrime, refers to any crime that involves a computer and a network, where the computer played an instrumental part in the commission of the crime.
The concept of cyber crime is not radically different from the concept of conventional crime. Both include conduct, whether act or omission, which causes a breach of rules of law and is counterbalanced by the sanction of the state.

5.Reasons for Cyber Crime
Hart, in his work “The Concept of Law”, said ‘human beings are vulnerable so rule of law is required to protect them’. Applying this to cyberspace, we may say that computers are vulnerable, so the rule of law is required to protect and safeguard them against cyber crime. The reasons for the vulnerability of computers may be said to be:
a) Capacity to store data in comparatively small space-
The computer has the unique characteristic of storing data in a very small space. This makes it much easier to remove or derive information through either a physical or a virtual medium.

6.b) Easy to access-The problem encountered in guarding a computer system from unauthorized access is that there is every possibility of breach, not due to human error but due to the complex technology. Secretly implanted logic bombs, key loggers that can steal access codes, advanced voice recorders, retina imagers etc. that can fool biometric systems and bypass firewalls can be utilized to get past many a security system.

c) Complex-Computers work on operating systems, and these operating systems are in turn composed of millions of lines of code. The human mind is fallible, and it is not possible to ensure that there is no lapse at any stage. Cyber criminals take advantage of these lacunae and penetrate the computer system.

7.d) Negligence-Negligence is very closely connected with human conduct. It is therefore very probable that, while protecting a computer system, there might be some negligence, which in turn provides a cyber criminal the opportunity to gain access to and control over the computer system.

e) Loss of evidence-Loss of evidence is a very common and obvious problem, as all the data are routinely destroyed. Further, collection of data outside the territorial extent also paralyses the system of crime investigation.

8.Unauthorized access to computer systems or networks / Hacking.

Theft of information contained in electronic form.

Email bombing.

9.Data diddling-This kind of an attack involves altering raw data just before a computer processes it and then changing it back after the processing is completed.

10.This kind of crime is normally prevalent in financial institutions, or is used for the purpose of committing financial crimes. An important feature of this type of offence is that the alteration is so small that it would normally go unnoticed.


Presentation On DHTML And ASP

DHTML And ASP Presentation Transcript:

DHTML is NOT a language.
DHTML is a TERM describing the art of making dynamic and interactive web pages.
DHTML combines HTML, JavaScript, the HTML DOM and CSS.
"Dynamic HTML is a term used by some vendors to describe the combination of HTML, style sheets and scripts that allows documents to be animated."

Active Server Pages (ASPs) are Web pages that contain server-side scripts in addition to the usual mixture of text and HTML tags.
Server-side scripts are special commands you put in Web pages that are processed before the pages are sent from the server to the web-browser of someone who's visiting your website.

4.Before the server sends the Active Server Page to the browser, it runs all server-side scripts contained in the page.
Active Server Pages are given the ".asp" extension.

5.Server-side scripts typically start with <% and end with %>.
The <% is called an opening tag, and the %> is called a closing tag. In between these tags are the server-side scripts.
 You can insert server-side scripts anywhere in your webpage - even inside HTML tags.

6.Since the server must do additional processing on the ASP scripts, it must have the ability to do so.
The only servers which support this facility are Microsoft Internet Information Services & Microsoft Personal Web Server.


Hello, World!

<%
Response.Write “Hello, World!”
%>

As you can see above, we have enclosed a single line of VBScript within the opening <% and closing %> tags. It says:
Response.Write “Hello, World!”
This statement displays the string “Hello, World!” on the webpage.

9.Displaying the Date

Hello, World !

<%= Date %>
A variable is declared in VBScript using the Dim keyword.
 Dim myVar
In VBScript, all variables are variants. Their type is determined automatically by the runtime interpreter, and the programmer need not (and should not) bother with them.

PPT On Document Architecture

Presentation On Document Architecture

Document Architecture Presentation Transcript: 
1.Document Architecture

Exchanging documents entails exchanging the document content as well as the document structure. This requires that both documents have the same document architecture. The current standards in document architecture are:
1. Standard Generalized Markup Language

2. Open Document Architecture

The Standard Generalized Markup Language (SGML) was supported mostly by American publishers. Authors prepare the text, i.e., the content. They specify in a uniform way the title, tables, etc., without a description of the actual representation (e.g., script type and line distance). The publisher specifies the resulting layout.
The basic idea is that the author uses tags for marking certain text parts. SGML determines the form of tags. But it does not specify their location or meaning. User groups agree on the meaning of the tags.

4.SGML makes a frame available with which the user specifies the syntax description in an object-specific system. Here, classes and objects, hierarchies of classes and objects, inheritance and the link to methods (processing instructions) can be used in the specification. SGML specifies the syntax, but not the semantics.

5.For example, an author and a remark in a text document might be tagged as follows (the tag names here are only illustrative, since user groups agree on the actual meaning of the tags):
<author>Felix Gatou</author>
<remark>This exceptional paper from Peter…</remark>
This example shows an application of SGML in a text document.

The Open Document Architecture (ODA) was initially called the Office Document Architecture because it supports mostly office-oriented applications. The main goal of this document architecture is to support the exchange, processing and presentation of documents in open systems. ODA has been endorsed mainly by the computer industry, especially in Europe.

7.Details Of ODA
The main property of ODA is the distinction among content, logical structure and layout structure. This is in contrast to SGML, where only a logical structure and the contents are defined. ODA also defines semantics. The following figure shows these three aspects linked to a document. One can imagine these aspects as three orthogonal views of the same document. Each of these views represents one aspect; together we get the actual document.

8.Details Of ODA
The content of the document consists of Content Portions. These can be manipulated according to the corresponding medium.

9.A content architecture describes for each medium: (1) the specification of the elements, (2) the possible access functions and, (3) the data coding. Individual elements are the Logical Data Units (LDUs), which are determined for each medium.
The access functions serve for the manipulation of individual elements. The coding of the data determines the mapping with respect to bits and bytes.

10.ODA has content architectures for media text, geometrical graphics and raster graphics. Contents of the medium text are defined through the Character Content Architecture.
The Geometric Graphics Content Architecture allows a content description of still images. It also takes into account individual graphical objects.
Pixel-oriented still images are described through Raster Graphics Content Architecture. It can be a bitmap as well as a facsimile.

PPT On E Gas Sewa

Presentation On E Gas Sewa

E Gas Sewa Presentation Transcript: 
1.E Gas Sewa

E Gas Sewa is an application that provides in-depth services for an Indian gas agency, greatly benefiting its customers across the company's range of services and making it easy for them to take a gas connection, book gas and deal with all related problems online.

Guest User

4.Customer Services:
Account Creation
Check status

5.Dealer Services:
 View Customer Orders
Customer Complaints

6.Admin Services:
Authority to add/delete customers.
Authority to add/delete dealers.
Authentication for new requests.
Management & Regulation.

7.Guest Users:
Check Product Rates.
Procedure to get a new connection.
Security Aspects of LPG Gas.


9.Microsoft Visual Studio 2008

Dreamweaver

Microsoft SQL Server 2008

Secure access to confidential data (user details).
24 x 7 availability.
Better component design for better performance at peak times.
A flexible service-based architecture is highly desirable for future extension.


Presentation On ESTIMATICS

ESTIMATICS Presentation Transcript:

2.What is Estimatics ?
Rubin has developed a proprietary software estimating model that utilizes gross business specifications for its calculations. The model provides estimates of total development effort, staff requirements, cost, risk involved, and portfolio effects. At present, the model addresses only the development portion of the software life cycle, ignoring the maintenance or post-deployment phase. The ESTIMACS model addresses three important aspects of software management: estimation, planning, and control.

3.Modules Of Estimatics
There are five main modules of ESTIMACS:

1. System development effort estimator

2. Staffing and cost estimator

3. Hardware configuration estimator

4. Risk Estimator

5. Portfolio analyzer
Each of them is explained in detail below.

4.System development effort
This module requires responses to 25 questions regarding the system to be developed, development environment, etc. It uses a database of previous project data to calculate an estimate of the development effort.

5.Staffing and cost
Inputs required are: the effort estimation from above, data on employee productivity, and salary for each skill level. Again, a database of project information is used to compute the estimate of project duration, cost, and staffing required.
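The shape of that computation can be pictured as follows. Since ESTIMACS is proprietary, the skill levels, productivity factors and salaries below are invented for illustration; only the general form (effort spread over a staffing mix, priced per skill level) follows the description above.

```python
# Hypothetical skill-level data: (productivity factor, salary per hour).
# A productivity factor above 1.0 means the level needs more hours per
# unit of estimated work.
SKILL_LEVELS = {
    "junior": (1.4, 30.0),
    "senior": (1.0, 55.0),
}

def staffing_and_cost(effort_hours, mix):
    """mix maps a skill level to the fraction of work assigned to it."""
    cost = 0.0
    for level, fraction in mix.items():
        productivity, salary = SKILL_LEVELS[level]
        cost += effort_hours * fraction * productivity * salary
    return cost

print(round(staffing_and_cost(1000, {"junior": 0.6, "senior": 0.4}), 2))
```

A real estimator would of course also draw on the project database mentioned above to predict duration and headcount, not just cost.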

6.Hardware configuration
Inputs required are: information on the operating environment for the software product, total expected transaction volume, generic application type, etc. Output is an estimate of the required hardware configuration.

7.Risk Estimator
This module calculates risk using answers to some 60 questions on project size, structure, and technology.  Some of the answers are computed automatically from other information already available.

8.Portfolio Analyzer
This module shows the effect of the project on the organization's overall portfolio of projects, i.e., the combined staffing and resource demands when the project runs alongside others (the portfolio effects mentioned above).

The ESTIMACS system has been in use for only a short time. In the future, Rubin plans to extend the model to include the maintenance phase of the software life cycle. He claims that estimates of the total effort are within 15% of actual values.

PPT On SPR Knowledge Plan

Presentation On SPR Knowledge Plan

SPR Knowledge Plan Presentation Transcript:

1.The Standish Group “Chaos” report shows:
 31.1% of projects will be cancelled before they ever get completed.
52.7% of projects will cost 189% of their original estimates.
The cost of these failures and overruns is just the tip of the iceberg. The lost opportunity costs are not measurable, but could easily be in the trillions of dollars.
Only 16.2% of software projects are completed on time and on budget.
Even when these projects are completed, many are no more than a mere shadow of their original specification requirements.
Projects completed by the largest American companies have only approximately 42% of the originally-proposed features and functions.

2.There is no standard estimation process
Project managers make use of their own tools, methods and processes to come up with estimates. These are therefore inconsistent and often dependent on the experience and knowledge of the project manager.
There is no central repository for recording estimates
Assumptions therefore cannot be traced and this problem is compounded by turnover of project management staff.
Initial estimates that drive the initial budgeting process tend to be significantly lower than final budgets/actuals.
History shows that projects planned for a release often miss their release dates, which increases costs and stretches delivery timelines.
A large percentage of active projects have no baseline in place.

3.SPR History
Software Productivity Research (SPR) was founded in 1984 by Capers Jones, an internationally recognized consultant, speaker, author and seminar leader in the field of software management.
SPR is a worldwide provider of consulting services that enable organizations to compete more effectively through the predictable, on-time delivery of high-quality software by focusing on software estimation, measurement, and assessment. Their services help companies manage the software development process for maximum productivity, performance, and quality.
SPR’s clients include many of the Global 1000 companies, representing all major software environments including systems, IT, commercial, military, and government. They focus on capturing and analyzing the software practices of Best in Class organizations - those recognized for outstanding quality and service. In addition, they help software organizations achieve higher performance as they progress on the road to excellence.
Headquartered in Hendersonville, North Carolina, with representation throughout the US, South America, Europe, Asia and now Africa, SPR has unparalleled expertise in estimation, benchmarking, measurement, function point analysis, and process assessment. They have been providing products and services in these areas longer than any other company in the field today, and they use this experience to enhance their clients' capabilities.

4.What is SPR KnowledgePLAN?
KnowledgePLAN is a parametric estimation tool that uses historical data about projects, correlated to Function Point size, to produce detailed, bottom-up (micro-estimation) predictions of software projects.

With SPR KnowledgePLAN you can:

Determine Size - size your projects using one of three possible sizing methods.
Estimate Effort - produce an estimate of the effort, cost and resources required for a software project.
Estimate Quality - predict the total number of defects that will be introduced during the various stages of the project.
Project Factors - assess the influence of project factors such as product size and complexity, team skill sets, management style, tools, languages, methods, quality practices, and office environment.
Scenario Play - explore the cost/value implications of additional resources, more powerful languages, development tools, improved methods and other technical changes.
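A parametric, function-point-driven estimate has the general shape sketched below. The coefficients are assumptions chosen for illustration, not SPR's calibrated knowledge base.

```python
# Illustrative parametric model: effort as a power function of size in
# function points.  A penalty exponent above 1 captures the coordination
# overhead of large projects; both constants are invented for this sketch.
def effort_person_months(function_points, productivity=0.15, penalty=1.12):
    return productivity * (function_points ** penalty)

small = effort_person_months(100)
large = effort_person_months(1000)

# Ten times the size costs more than ten times the effort:
assert large > 10 * small
```

Tools like KnowledgePLAN calibrate such curves against their historical project database (the 14,000+ projects mentioned below) rather than using fixed constants, and then adjust for the project factors listed above.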

5.SPR Knowledge Base

6.SPR KnowledgePLAN - Overview

7.Key stages to Estimation
How does KnowledgePLAN support a four-stage estimation process?

8.SPR supports three different types of sizing methods.
This allows organizations to align sizing methods with the specific inputs to estimation that become available through the project lifecycle.
There are more than 14,000 projects in the Knowledge Base.

9.Estimation Stage - Complexity

PPT On FAQs during an Interview

FAQs during an Interview

FAQs during an Interview Presentation Transcript:
1.FAQs during an Interview

2.Tell me about yourself
What do you know about this organization?
What are your team player qualities?
Why should I hire you?
Why do you want to work here?
How do you deal with pressure?
What relevant experience do you have?
Where else have you applied?

3.What's your biggest weakness?
10. What's your biggest strength?
11. Let's talk about salary. What are you looking for?
12. What are your career goals?
13. Do you know anyone who works for us?
14. How long would you expect to work for us if hired?
15. What is your philosophy towards work?
