The Dictionary object stores name/value pairs (referred to as the key and item, respectively), much like an associative array. The key is a unique identifier for the corresponding item and cannot be used for any other item in the same Dictionary object.
The following code creates a Dictionary object called "cars", adds some key/item pairs, retrieves the item value for the key 'b' using the Item property, and then writes the resulting string to the browser.
<%
Dim cars
Set cars = CreateObject("Scripting.Dictionary")
cars.Add "a", "Alvis"
cars.Add "b", "Buick"
cars.Add "c", "Cadillac"
Response.Write "The value corresponding to the key 'b' is " & cars.Item("b")
%>
Output: "The value corresponding to the key 'b' is Buick"
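Because each key must be unique, calling Add with a key that already exists raises a run-time error. As an additional illustrative sketch (not from the original code; the second item value "Audi" is arbitrary), the Exists method can be used to guard against this:
<%
Dim cars
Set cars = CreateObject("Scripting.Dictionary")
cars.Add "a", "Alvis"
' Adding the key "a" again would raise a run-time error,
' so check with Exists before adding.
If Not cars.Exists("a") Then
    cars.Add "a", "Audi"
End If
Response.Write "Number of key/item pairs: " & cars.Count
%>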
Procedures are small logical components that split a program into separate tasks. A procedure can be called from inside another procedure.
Procedures can take arguments, perform a series of operations, and change the values of their arguments.
The benefits of programming with procedures are:
The general format of a VBScript procedure is as follows:
Procedure_type Procedure_Name(argument_list)   ' the procedure heading
    declaration statements
    execution statements
End Procedure_type
<html>
<body>
<script language="vbscript" type="text/vbscript">
Function sayHello()
msgbox("Hello there")
End Function
</script>
</body>
</html>
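The procedure above takes no arguments. The following is an additional illustrative sketch (the names addNumbers and showSum, and the values 10 and 20, are only examples) showing a Function that accepts arguments and returns a value, and a Sub that calls it:
<html>
<body>
<script language="vbscript" type="text/vbscript">
Function addNumbers(a, b)
    ' The value assigned to the function name is returned to the caller
    addNumbers = a + b
End Function

Sub showSum()
    ' Call the function and display its result
    MsgBox "The sum is " & addNumbers(10, 20)
End Sub

Call showSum()
</script>
</body>
</html>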
A procedure body can have two parts: declaration statements and execution statements.
VBScript (Visual Basic Script) was developed by Microsoft with the intention of developing dynamic web pages. It is a client-side scripting language, like JavaScript. VBScript is a light version of Microsoft Visual Basic, and its syntax is very similar to that of Visual Basic. If you want your web page to be more lively and interactive, you can incorporate VBScript in your code. [2 Mark]
InStr: returns the position of the first occurrence of the specified substring. The search happens from left to right.
Dim txt
txt = "This is a beautiful day!"
MsgBox InStr(txt, "beautiful")
InStrRev: returns the position of the first occurrence of the specified substring. The search happens from right to left.
Dim txt
txt = "This is a beautiful day!"
MsgBox InStrRev(txt, "beautiful")
Left: returns a specified number of characters from the left side of a string.
Dim txt
txt = "This is a beautiful day!"
MsgBox Left(txt, 15)
Right: returns a specific number of characters from the right side of the string.
Dim txt
txt = "This is a beautiful day!"
MsgBox Right(txt, 10)
Split: returns a zero-based, one-dimensional array that contains a specified number of substrings.
Dim txt, splitText
txt = "This is a beautiful day!"
splitText = Split(txt)
For Each x In splitText
    MsgBox x
Next
Join: returns a string that consists of a number of substrings in an array.
Dim days
days = Array("Life", "Is", "Good")
MsgBox Join(days)
LCase: returns the lower case of the specified string.
UCase: returns the upper case of the specified string.
Trim: returns the string value after removing both leading and trailing blank spaces.
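A brief illustrative sketch of these three functions (the sample string is arbitrary):
Dim txt
txt = "   A Beautiful Day   "
MsgBox LCase(txt)   ' "   a beautiful day   "
MsgBox UCase(txt)   ' "   A BEAUTIFUL DAY   "
MsgBox Trim(txt)    ' "A Beautiful Day"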
Number functions help developers convert numbers from one form to another and perform common mathematical operations on numeric values.
Int: returns the integer part of the specified number.
Oct: returns the octal value of the given number.
Hex: returns the hexadecimal value of the given number.
Sqr: returns the square root of the specified number.
Abs: returns the absolute value of the specified number.
Log: returns the natural logarithm of a specified number. The natural logarithm is the logarithm to the base e.
Sin: returns the sine value of the specified number.
Cos: returns the cosine value of the specified number.
Tan: returns the tan value of the specified number.
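An illustrative sketch of a few of these functions (the values in the comments are the expected results):
MsgBox Int(7.8)    ' 7
MsgBox Oct(8)      ' 10
MsgBox Hex(255)    ' FF
MsgBox Sqr(16)     ' 4
MsgBox Abs(-5)     ' 5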
Date functions help developers convert a date from one format to another or express the date value in a format that suits a specific condition.
Date: returns the current system date.
CDate: converts a valid date and time expression to a Date value and returns the result.
DateAdd: returns a date to which a specified time interval has been added.
DateDiff: returns the interval between two dates.
DatePart: returns the specified part of a given date.
IsDate: returns a Boolean value that specifies whether the given expression can be converted to a date.
Day: returns an integer between 1 and 31 that represents the day of the specified date.
Month: returns an integer between 1 and 12 that represents the month of the specified date.
Year: returns an integer that represents the year of the specified date.
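A short illustrative sketch of some of these functions (the date literal assumes an English locale, and the Date/DateDiff output depends on the current system date):
Dim d
d = CDate("22-Jun-2023")
MsgBox Date                     ' current system date
MsgBox DateAdd("d", 30, d)      ' date 30 days after d
MsgBox DateDiff("d", d, Date)   ' number of days between d and today
MsgBox Day(d) & "-" & Month(d) & "-" & Year(d)   ' 22-6-2023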
During static testing, developers work to avoid potential problems that might arise later. Without executing the code, they perform manual or automated reviews of the supporting documents for the software, such as requirement specifications, looking for any potential ambiguities, errors or redundancies. The goal is to preempt defects before introducing them to the software system.
The next phase of software testing is unit testing. Unit Testing is a type of software testing where individual units or components of a software are tested. The purpose is to validate that each unit of the software code performs as expected. Unit Tests isolate a section of code and verify its correctness. A unit may be an individual function, method, procedure, module, or object. Unit testing is a WhiteBox testing technique that is usually performed by the developers during the development (coding phase) of an application.
Integration Testing is a type of testing where software modules are integrated logically and tested as a group. A typical software project consists of multiple software modules, coded by different programmers. The purpose of this level of testing is to expose defects in the interaction between these software modules when they are integrated. Through integration testing, the developers can determine the overall efficiency of the units as they run together. This phase is important because the program's overall functionality relies on the units operating together as a complete system, not as isolated procedures.
In the system testing phase, the software undergoes its first test as a complete, integrated application to determine how well it carries out its purpose. For this, the developers pass the software to independent testers who had no involvement in its development in order to ensure that the testing results stem from impartial evaluations. System testing is vital because it ensures that the software meets the requirements as determined by the client.
Acceptance testing is the last phase of software testing. Its purpose is to evaluate the software's readiness for release and practical use. Testers may perform acceptance testing alongside individuals who represent the software's target audience. Acceptance testing aims to show whether the software meets the needs of its intended users and that any changes the software experienced during development are appropriate for use. Once the software passes acceptance testing, it moves on to production.
• A software bug is a problem causing a program to crash or produce invalid output. The problem is caused by insufficient or erroneous logic. A bug can be an error, mistake, defect, or fault, which may cause failure or deviation from expected results.
• Software bugs should be caught during the testing phase of the software development life cycle, but some can go undetected until after deployment.
• Debugging is a process of finding out where something went wrong and correcting the code to eliminate the errors or bugs that cause unexpected results. A software debugging system can provide tools for finding errors or bugs in programs and correcting them. [2 Mark]
This type of coding error can cause the program to produce incorrect output or even freeze or crash. For example, an infinite loop causes the program to repeat a sequence of actions indefinitely until it crashes or is halted by external intervention, such as the program being closed or a power outage.
Calculation errors may occur due to bad logic, an incorrect formula, a data type mismatch or a coding error. Calculation errors can be serious. For example, a calculation error in a banking system can result in loss of money. Fixing the calculation error is typically just a matter of math.
This type of error occurs whenever software does not behave as intended. For example, a Login button doesn't allow users to log in, an Add to Cart button doesn't update the cart, etc. So basically, any component in an app or website that doesn't function as intended is a functional bug.
Security errors are perhaps among the most severe defects, as they make the project vulnerable. This type of bug exposes the software, the company, and its customers to potentially serious attacks.
Such errors occur in the program's source code and prevent the program from compiling correctly. This type of defect is widespread and usually occurs when there are one or more missing or incorrect characters in the code. For example, missing one parenthesis can cause a syntax error.
These software bugs occur when an application does not run consistently across different hardware, operating systems, and browsers.
This type of software bug relates to speed, stability, response time, and software resource consumption. Such defects are discovered during the performance testing phase.
• Model-based testing is a software testing technique in which the test cases are derived from a model that describes the functional aspects of the system under test. The model is used to generate tests, and this includes both offline and online testing. Model-based testing is a modern approach to software testing.
• A model is a description of a system's behavior. Behavior can be described in terms of input sequences, actions, conditions, output, and the flow of data from input to output. A model should be practically understandable, reusable, and shareable, and it must have a precise description of the system under test.
• Model-Based Testing describes how a system behaves in response to an action (as determined by a model). Supply an action and see whether the system responds as expected. It is a lightweight formal method for validating a system.
There are two types of model-based testing frameworks:
Offline / a priori: test suites are generated before they are executed.
Online / on-the-fly: test suites are generated during test execution.
This model helps testers assess the result depending on the input selected. There can be various combinations of inputs, each resulting in a corresponding state of the system. The system has a finite set of states, and its current state is governed by the set of inputs given by the testers.
It is an extension of the finite state machine and can be used for complex and real-time systems. Statecharts are used to describe the various behaviors of the system. A statechart has a definite number of states, and the behavior of the system is analyzed and represented in the form of events for each state.
Unified Modeling Language (UML) is a standardized general-purpose modeling language. UML includes a set of graphic notation techniques to create visual models that can describe very complicated system behavior.
Complicated systems that process many different, complicated transactions should have explicit representations of the transaction flows, or the equivalent.
Transaction flows are like control flow graphs, and consequently we should expect to have them in increasing levels of detail.
The system's design documentation should contain an overview section that details the main transaction flows.
Detailed transaction flows are a mandatory prerequisite to the rational design of a system's functional testing.
A transaction flow is the sequence of steps a transaction takes through the system to be tested. Transaction flow modeling begins at preliminary design and continues as the project progresses. The following testing strategies are used:
Transaction flows are a natural agenda for system reviews or inspections. In conducting the walkthroughs, you should:
Discuss enough transaction types to account for 98% to 99% of the transactions the system is expected to process.
Discuss paths through flows in functional rather than technical terms.
Ask the designers to relate every flow to the specification and to show how that transaction, directly or indirectly, follows from the requirements.
Make transaction flow testing the cornerstone of system functional testing just as path testing is the cornerstone of unit testing.
Select additional flow paths for loops, extreme values, and domain boundaries.
Design more test cases to validate all births and deaths.
Publish and distribute the selected test paths through the transaction flows as early as possible so that they will exert the maximum beneficial effect on the project.
Select a set of covering paths (c1+c2) using the analogous criteria that were used for structural path testing.
Select a covering set of paths based on functionally sensible transactions, as you would for control flow graphs.
Try to find the most tortuous, longest, strangest path from the entry to the exit of the transaction flow.
Most of the normal paths are very easy to sensitize; 80% to 95% transaction flow coverage (c1 + c2) is usually easy to achieve.
The remaining small percentage is often very difficult.
Sensitization is the act of defining the transaction. If there are sensitization problems on the easy paths, then bet on either a bug in transaction flows or a design bug.
Instrumentation plays a bigger role in transaction flow testing than in unit path testing.
The information of the path taken for a given transaction must be kept with that transaction and can be recorded by a central transaction dispatcher or by the individual processing modules.
In some systems, such traces are provided by the operating systems or a running log.
• Data-flow testing is a white box testing technique that can be used to detect improper use of data values due to coding errors.
• The primary purpose of dynamic data-flow testing is to uncover possible bugs in data usage during the execution of the code. To achieve this, test cases are created which trace every definition to each of its use and every use is traced to each of its definition. Various strategies are employed for the creation of the test cases.
The all-du-paths (ADUP) strategy requires that every du-path from every definition of every variable to every use of that definition be exercised under some test.
This is the strongest data-flow testing strategy, since it is a superset of all other data-flow testing strategies. Moreover, it requires the greatest number of paths for testing.
The all uses (AU) strategy requires that at least one definition-clear path from every definition of every variable to every use of that definition be exercised under some test.
For every variable and every definition of that variable, include at least one definition-clear path from the definition to every predicate use; if there are definitions of the variable that are not covered, then add computational-use test cases as required to cover every definition.
In this testing strategy, for every variable, there is a path from every definition to every p-use of that definition. If there is a definition with no p-use following it, then a c-use of the definition is considered.
For every variable and every definition of that variable, include at least one definition-clear path from the definition to every computational use; if there are definitions of the variable that are not covered, then add predicate-use test cases as required to cover every definition.
In this testing strategy, for every variable, there is a path from every definition to every c-use of that definition. If there is a definition with no c-use following it, then a p-use of the definition is considered.
The all definitions strategy asks that every definition of every variable be covered by at least one use of that variable, be that use a computational use or a predicate use.
In this strategy, there is path from every definition to at least one use of that definition.
The all predicate uses (APU) strategy is derived from APU+C strategy by dropping the requirement of including a c-use if there are no p-use instances following the definition.
In this testing strategy, for every variable, there is path from every definition to every p-use of that definition. If there is a definition with no p-use following it, then it is dropped from contention.
The all computational uses (ACU) strategy is derived from ACU+P strategy by dropping the requirement of including a p-use if there are no c-use instances following the definition.
In this testing strategy, for every variable, there is a path from every definition to every c-use of that definition. If there is a definition with no c-use following it, then it is dropped from contention.
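To make the definition/use terminology concrete, here is a small illustrative VBScript fragment (not from the source; the score threshold of 100 and the 10% bonus are arbitrary). The comments mark where the variable total is defined, where it has a predicate use (p-use), and where it has a computational use (c-use):
Dim total, bonus
total = CInt(InputBox("Enter a score"))   ' definition of total
If total > 100 Then                       ' p-use of total (used in a predicate)
    bonus = total * 0.1                   ' c-use of total, definition of bonus
    total = total + bonus                 ' c-use of total followed by a new definition
End If
MsgBox total                              ' c-use of total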
In tracing a path or path segment through a flow graph, you traverse a succession of link names. The name of the path or path segment that corresponds to those links is expressed naturally by concatenating those link names. For example, if you traverse links a, b, c and d along some path, the name for that path segment is abcd. This path name is also called a Path Product. The figures below contain more examples.
The name of a path that consists of two successive path segments is conveniently expressed by the concatenation or Path Product of the segment names.
For example, if X and Y are defined as X = abcde, Y = fghij, then the path corresponding to X followed by Y is denoted by XY = abcdefghij. Similarly, YX = fghijabcde, aX = aabcde, Xa = abcdea, XaX = abcdeaabcde
If X and Y represent sets of paths or path expressions, their product represents the set of paths that can be obtained by following every element of X by any element of Y in all possible ways.
For example, if X and Y are defined as X = abc + def + ghi, Y = uvw + z. Then XY = abcuvw + defuvw + ghiuvw + abcz + defz + ghiz
If a link or segment name is repeated, that fact is denoted by an exponent. The exponent's value denotes the number of repetitions:
a^1 = a, a^2 = aa, a^3 = aaa, a^n = aaa...a (n times).
Similarly, if X = abcde then
X^1 = abcde, X^2 = abcdeabcde = (abcde)^2, X^3 = abcdeabcdeabcde = (abcde)^3
The path product is not commutative, i.e., XY != YX.
The path product is associative, i.e., A(BC) = (AB)C = ABC.
• Path instrumentation is what we have to do to confirm that the outcome was achieved by the intended path.
• The above figure is an example of a routine that, for the (unfortunately) chosen input value (X = 16), yields the same outcome (Y = 2) no matter which case we select. Therefore, tests chosen this way will not tell us whether we have achieved coverage; for example, the five cases could be totally jumbled and the outcome would still be the same.
An interpretive trace program is one that executes every statement in order and records the intermediate values of all calculations, the statement labels traversed etc.
If we run the tested routine under a trace, then we have all the information we need to confirm the outcome and, furthermore, to confirm that it was achieved by the intended path.
Name every link by a lower case letter. Instrument the links so that the link's name is recorded when the link is executed. The succession of letters produced in going from the routine's entry to its exit should exactly correspond to the path name.
Instead of pushing a unique link name onto a string when the link is traversed, we simply increment a link counter. We then confirm that the path length is as expected.
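As an illustrative sketch of the link-marker idea (not from the source; the link names a to d and the sample input value are arbitrary), each link appends its name to a trace string, and the final string is compared with the intended path name. A link-counter variant would instead increment a counter at each link and compare the total against the expected path length.
Dim x, trace
x = 5                             ' sample input value
trace = ""
trace = trace & "a"               ' link a: entry
If x > 0 Then
    trace = trace & "b"           ' link b: true branch
Else
    trace = trace & "c"           ' link c: false branch
End If
trace = trace & "d"               ' link d: exit
MsgBox "Path taken: " & trace     ' "abd" for x = 5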
• Consider a pair of nodes in a graph and the set of paths between those nodes.
• From the above figure, the members of the path set can be listed as follows:
• Alternatively, the same set of paths can be denoted by :
• The + sign represents "OR" between the paths joining the two nodes of interest. It means that either one of the paths can be taken.
• Any expression that consists of path names and "OR"s and which denotes a set of paths between two nodes is called a "Path Expression".
• The "Path Sum" denotes paths in parallel between nodes.
• In the above figure, links a, b and c, d are parallel paths and are denoted by a + b and c + d respectively. The set of all paths between nodes 1 and 2 can be thought of as a set of parallel paths and denoted by eacf + eadf + ebcf + ebdf.
• If a and b are sets of paths that lie between the same pair of nodes, then (a + b) denotes the UNION of those set of paths. Therefore, in the above flowgraph, the set of all paths is e(a + b)(c + d)f.
• In production of consumer goods and other products, every manufacturing stage is subjected to quality control and testing from start to final stage. If flaws are discovered at any stage, the product is either discarded or cycled back for rework and correction.
• Productivity is measured by the sum of the costs of the material, rework, discarded components, quality assurance, and testing.
• There is a trade-off between quality assurance costs and manufacturing costs: If sufficient time is not spent in quality assurance, the reject rate will be high and so will be the net cost. If inspection is good and all errors are caught as they occur, inspection costs will dominate, and again the net cost will suffer.
• Testing and quality assurance costs for manufactured items can be as low as 2% for consumer products or as high as 80% for software products used in spaceships, nuclear reactors, and aircraft, where failures threaten life.
• The biggest part of software cost is the cost of bugs: the cost of detecting them, the cost of correcting them, the cost of designing tests that discover them, and the cost of running those tests.
• Testing and Test Design are parts of quality assurance and should also focus on bug prevention. A prevented bug is better than a detected and corrected bug.
• The main difference then between widget productivity and software productivity is that for hardware, quality is only one of several productivity determinants, whereas for software, quality and productivity are almost indistinguishable.
• State Transition Testing is basically a black box testing technique that is carried out to observe the behavior of the system or application for different input conditions passed in a sequence. In this type of testing, both positive and negative input values are provided and the behavior of the system is observed.
• State Transition Testing is basically used where different system transitions are needed to be tested.
• There are two main ways to represent or design state transition. They are State Transition Diagram and State Transition Table.
• State Transition Diagram shows how the state of the system changes on certain inputs. It is also called State Chart or Graph. It is useful in identifying valid transitions. It has four main components.
• Let's consider an ATM system function where if the user enters the invalid password three times the account will be locked. The below diagram represents that scenario.
• In the diagram, whenever the user enters the correct PIN he is moved to the Access Granted state; if he enters the wrong password he is moved to the next try; and if he does the same for the third time, the Account Blocked state is reached.
• In the state transition table, all the states are listed on the left side, and the events are described along the top. Each cell in the table represents the state of the system after the event has occurred. It is also called a State Table. It is useful in identifying invalid transitions.
• Let's consider an ATM system function where if the user enters the invalid password three times the account will be locked. The below table represents that scenario.
• In the table, when the user enters the correct PIN, the state transitions to S5, which is Access Granted. If the user enters a wrong password, he is moved to the next state, and if he does the same for the third time, he reaches the Account Blocked state. All the valid states are listed on the left side, and the invalid states on the right side.
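• The same scenario can be sketched in code. The following illustrative VBScript fragment (not from the source; the PIN value, state names, and prompts are hypothetical) shows how a correct PIN leads to the Access Granted state, while a third wrong PIN leads to the Account Blocked state:
Dim correctPin, attempts, state, pin
correctPin = "1234"               ' hypothetical PIN
attempts = 0
state = "Start"

Do While state <> "AccessGranted" And state <> "AccountBlocked"
    pin = InputBox("Enter PIN")
    attempts = attempts + 1
    If pin = correctPin Then
        state = "AccessGranted"   ' correct PIN: access is granted
    ElseIf attempts >= 3 Then
        state = "AccountBlocked"  ' third wrong PIN: account is locked
    Else
        state = "Try" & (attempts + 1)   ' move to the next try state
    End If
Loop

MsgBox "Final state: " & state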
• Black Box Testing is a software testing method in which the functionalities of software applications are tested without having knowledge of internal code structure, implementation details and internal paths. Black Box Testing mainly focuses on input and output of software applications and it is entirely based on software requirements and specifications. It is also known as Behavioral Testing.
• It is used for validation. In this, we ignore the internal working mechanism and focus on what the output is.
• White Box Testing is software testing technique in which internal structure, design and coding of software are tested to verify flow of input-output and to improve design, usability and security. In white box testing, code is visible to testers so it is also called Clear box testing, Open box testing, Transparent box testing, Code-based testing and Glass box testing.
• It is used for verification. In this, we focus on the internal mechanism, i.e., how the output is achieved.
Smoke Testing is a software testing process that determines whether the deployed software build is stable or not. Smoke testing is a confirmation for the QA team to proceed with further software testing. It consists of a minimal set of tests run on each build to test software functionalities. Smoke testing is also known as “Build Verification Testing” or “Confidence Testing”.
Multimedia refers to the computer-assisted integration of text, drawings, still and moving images (videos), graphics, audio, animation, and any other media in which any type of information can be expressed, stored, communicated, and processed digitally. We use multimedia to interact with the user.
Multimedia finds its application in various areas including, but not limited to, advertisements, art, education, entertainment, engineering, medicine, mathematics, and scientific research. A few application areas of multimedia are listed below:
One of the main and most widespread applications of multimedia is in the entertainment industry. Movies, advertisements and short clips are now created using special effects and animation, such as VFX. Multimedia is also used for games, which are distributed online or through CDs; these games integrate various multimedia features.
Multimedia is used in the education sector to create interactive educational materials, like books, PDFs, videos, PowerPoint presentations, etc., along with one-touch access to websites like Wikipedia and online encyclopedias. Through virtual classrooms, teachers and students get the opportunity to learn, interact and exchange ideas without stepping outside or sitting for hours inside a classroom. On top of everything, computer-based competitive as well as scholastic exams are now conducted globally via the use of multimedia.
With the emergence of the internet and its rapid spread across the world, traditional types of communication have become obsolete. Online video calling has become the new face of communication. Video platforms like Skype and Google Meet allow video chats between friends or can be used for conducting meetings between heads of different countries. Communication now takes only a fraction of a second, so you can easily convey anything with just a few clicks. This has turned out to be a boon in situations of emergency; thus it is known as one of the most beneficial applications of multimedia.
Multimedia is increasingly used by doctors, who can get trained simply by watching a surgery being performed on a virtual platform. Simulation technology is used to model human anatomy and study how it is affected by different illnesses, and then accordingly develop medicines and other remedial measures.
Online business has effectively replaced traditional ways of buying and selling. Simply scrolling through online shopping sites like Amazon, we see how text, pictures and videos have been blended into an appealing user interface. Through the use of multimedia, various companies offer interesting details of products to the prospective consumer who, simply through a mobile phone, buys and compares products online to check their suitability and price.
Most multimedia and web projects must be undertaken in stages. Some stages should be completed before other stages begin, and some stages may be skipped or combined. Here are the four basic stages in a multimedia project:
A project always begins with an idea or a need that you then refine by outlining its messages and objectives. Before starting to develop the multimedia project, it is necessary to plan what writing skills, graphic art, music, video, and other multimedia skills will be required. It is also necessary to estimate the time needed to prepare all elements of multimedia and prepare a budget accordingly. After preparing a budget, a prototype of the concept can be developed to demonstrate whether or not your idea is feasible.
Perform each of the planned tasks to create a finished product. During this stage, there may be many feedback cycles with a client until the client is happy. Under this stage, the various sub-stages are to be carried out.
Test your programs to make sure that they are bug free, meet the objectives of your project, work properly on the intended delivery platforms, and meet the needs of your client or end user.
The final stage of the multimedia application development is to pack the project and deliver the complete project to the end-user. This stage has several steps such as:
To feed information into a multimedia system, we use various types of input devices. A few important input devices are explained below.
Keyboard:
The keyboard is a basic input device that is used to enter data
into a computer or any other electronic device by pressing keys.
It has different sets of keys for letters, numbers, characters,
and functions. Keyboards are connected to a computer through USB
or a Bluetooth device for wireless communication.
Mouse:
The mouse is a hand-held input device which is used to move cursor
or pointer across the screen. It is designed to be used on a flat
surface and generally has left and right buttons and a scroll wheel
between them. Laptop computers come with a touchpad that works as
a mouse. It lets you control the movement of cursor or pointer by
moving your finger over the touchpad. Some mice come with
integrated features such as extra buttons to perform different
actions.
Scanner:
The scanner takes pictures and pages of text as input. It scans
the picture or document, which is then converted into a digital
format or file and displayed on the screen as output. It can use
optical character recognition (OCR) techniques to convert scanned
text into editable digital text.
Microphone:
The microphone is a computer input device that is used to input
the sound. It receives the sound vibrations and converts them into
audio signals or sends them to a recording medium. The audio signals
are converted into digital data and stored in the computer. The
microphone also enables the user to telecommunicate with others.
Digital Camera:
It is a digital device as it
captures images and records videos digitally and then stores them
on a memory card. It is provided with an image sensor chip to
capture images, as opposed to film used by traditional cameras.
Besides this, a camera that is connected to your computer can also
be called a digital camera.
• Multimedia artist skills are abilities that can strengthen the outcome of your multimedia projects. To have multimedia skills means that you possess the expertise to use and manage various media formats to complete projects or work. A multimedia artist is a creative professional who uses technology and design methods to create original content they can present in an electronic format.
• There are various skills that a multimedia artist can acquire to help them succeed in their career. Here are four of the most common multimedia artist skills:
Creativity:
As artistic professionals, it's important for multimedia artists
to be creative to ensure that they are authoring original content.
They typically have a strong understanding of creative design
elements such as color theory and composition and they are often
aware of the latest design trends. Creativity is the foundation of
multimedia artistry.
Computer Literacy:
Computer literacy refers to the skills one has at their disposal
while using computer-based technology. Multimedia artists work
almost entirely on the computer, using a variety of software
programs for research and design purposes. Familiarity with
graphic design, photo correction, and video editing, all of which
are done using a computer, is an important component of completing
many multimedia projects.
Design:
Along with creativity, strong design skills are important
for multimedia artists to possess. Design skills can include
manipulation of colors and fonts, composing background music or
scripting a podcast, among many others.
Communication:
Like many other industries, multimedia design requires strong
communication amongst clients, artists and management. Multimedia
artists might correspond with their clients to ensure that they
are meeting all of their standards. They are also able to
communicate with clients and supervisors to negotiate and discuss
payment, deadlines or any other element vital to the completion of
a creative project.
The production of a high-end multimedia project requires a team effort. Different projects require different team members with different roles and responsibilities, but the common team members include the project manager, multimedia designer, script writer, audio specialist, and multimedia programmer.
Project Manager:
Project Managers are responsible for planning, organizing, and
directing the completion of the multimedia projects for the
organization while ensuring these projects are on time, on budget,
and within scope. By overseeing complex projects from inception to
completion, project managers have the potential to shape an
organization's trajectory, helping to reduce costs, maximize
company efficiencies, and increase revenue.
Multimedia (Interface) Designer
Multimedia designers need a variety of skills. They need to be able
to analyze content structurally and match it up with effective
presentation methods. They need to be experts on different media
types, and capable media integrators, in order to create an
overall vision.
Script Writer:
Multimedia writers do everything writers of linear media do, and
more. They create character, action, and point of view - a
traditional scriptwriter's tools of the trade - and they also
create interactivity. They write proposals, they script voice-overs
and actors' narrations, they write text screens to deliver
messages, and they develop characters designed for an interactive
environment.
Audio Specialist:
The quality of audio elements can make or break a multimedia
project. Audio specialists are the wizards who make a multimedia
program come alive, designing and producing music, voice-over
narrations, and sound effects.
Multimedia Programmer: A multimedia programmer or software engineer integrates all the multimedia elements of a project into a seamless whole using an authoring system or programming language. Multimedia programming functions range from coding simple displays of multimedia elements to controlling peripheral devices such as laserdisc players and managing complex timing, transitions, and record keeping.
The various components of multimedia are Text, Audio, Video, Graphics and Animation. All these components work together to represent information in an effective and easy manner.
Text
Text is the most common medium of representing the information.
Text appears in all multimedia creations in some form. The text
can be in a variety of fonts and sizes to match the multimedia
software's professional presentation. Text in multimedia systems
can communicate specific information or serve as a supplement to
the information provided by the other media.
Audio
Sound is one of the most important aspects of multimedia, delivering the joy
of music, special effects, and other forms of entertainment. Audio
files are used as part of the application context as well as to
enhance interaction. MP3, WMA, Wave, MIDI, and RealAudio are
examples of audio formats. The following programs are widely used
to play audio: Windows Media Player, Real Player, VLC Media Player
etc.
Video
The term video refers to a moving image that is accompanied by
sound, such as a television picture. It is the best way to
communicate with the audience. In multimedia it is used to make
the information more presentable and it saves a large amount of
time. The following programs are widely used to view videos:
Windows Media Player, Real Player, VLC Media Player etc.
Graphics
Graphics are at the heart of any multimedia presentation. The use
of visuals in multimedia enhances the effectiveness and
presentation of the concept. Pictures are more frequently used
than words to clarify concepts, offer background information, and
so on. Windows Picture, Internet Explorer, and other similar
programs are often used to see visuals. Adobe Photoshop is a
popular & widely used graphics editing program.
Animation
In computing, animation is used to make changes to images so that
the sequence of images appears to be a moving picture. An
animated sequence shows a number of frames per second to produce
an effect of motion in the user's eye. A presentation can also be
made lighter and more appealing by using animation. The following
are some of the widely used animation viewing programs: Fax
Viewer, Internet Explorer, etc.
A storage device is any type of computing hardware that is used for storing, porting or extracting data files and objects. Storage devices can hold and store information both temporarily and permanently. They may be internal or external to a computer, server or computing device.
Floppy Diskette: It is generally used on a personal computer to store data externally. A floppy disk is made up of a plastic cartridge secured with a protective case. Nowadays the floppy disk has been replaced by newer and more effective storage devices such as USB drives.
Hard Disk: A hard disk drive (HDD) is a storage device that stores and retrieves data using magnetic storage. It is a non-volatile storage device whose contents can be modified or deleted any number of times without any problem. Most computers and laptops have HDDs as their secondary storage device.
Flash Memory: It is a cheaper and portable storage device. It is the most commonly used device to store data because it is more reliable and efficient compared to other storage devices. Some of the commonly used flash memory devices are:
• Processor
• Memory and Storage Devices
• Input Devices
• Output Devices
• Device Driver
• Media Players
• Media Conversion Software
• Multimedia Editing Software
• Multimedia Authoring Software
• Multimedia authoring is a process of assembling different types of media contents like text, audio, image, animations and video as a single stream of information with the help of various software tools available in the market.
• Multimedia authoring tools give an integrated environment for joining together the different elements of a multimedia production.
• It gives the framework for organizing and editing the components of a multimedia project. It enables the developer to create interactive presentation by combining text, audio, video, graphics and animation.
• In these authoring systems, elements are organized as pages of a book or a stack of cards. The book or stack may contain thousands of pages or cards. These tools are best used when the bulk of your content consists of elements that can be viewed individually, for example the pages of a book or the cards in a card file. You can jump from page to page because all pages can be interrelated. In the authoring system you can organize pages or cards in a sequential manner. Every page of the book may contain many media elements like sounds, videos and animations.
• One page may have a hyperlink to another page that comes at a much later stage and by clicking on the same you might have effectively skipped several pages in between. Some examples of card or page tools are:
Icon-based tools give a visual programming approach to organizing and presenting multimedia. First you build a structure or flowchart of events, tasks and decisions by dragging appropriate icons from a library. Each icon does a specific task, for example it plays a sound or opens an image. The flowchart graphically displays the project's logic. When the structure is built, you can add your content: text, graphics, animation, video movies and sounds. A nontechnical multimedia author can also build sophisticated applications without scripting using icon-based authoring tools. Some examples of icon-based tools are:
Time-based authoring tools allow the designer to arrange the various elements and events of the multimedia project along a well-defined timeline. By timeline, we simply mean the passage of time. As time advances from the starting point of the project, the events begin to occur, one after another. The events may include media file playback as well as transitions from one portion of the project to another. The speed at which these transitions occur can also be accurately controlled. These tools are best used for projects in which the information flows from beginning to end, much like a movie. Some examples of time-based tools are:
• Object-oriented authoring tools support an environment based on objects. Each object has the following two characteristics:
• In these systems, multimedia elements events are often treated as objects that live in a hierarchical order of parent and child relationships. These objects use messages passed among them to do things according to the properties assigned to them.
• For example, a video object will likely have a duration property, i.e., how long the video plays, and a source property that holds the location of the video file. This video object will likely accept commands from the system such as play and stop. Some examples of object-oriented tools are:
• In some multimedia projects it may be required to create special characters. There are several software packages that can be used to create customized fonts. Using these font editing tools it is possible to create special symbols, distinct text and display faces. These tools help a multimedia developer communicate his idea or the graphic feeling in the way he wants. The following is a list of software that can be used for editing and creating fonts:
• To make your text look pretty you need a toolbox full of fonts and special graphics applications that can stretch, shade, color and anti-alias your words into real artwork.
• Pretty text can be found in bitmapped drawings where characters have been tweaked, manipulated and blended into a graphic image.
Audio format defines the quality and loss of audio data. Based on the application, different types of audio formats are used. Audio formats are broadly divided into three parts:
It is a form of compression that loses data during the compression process, but the difference in quality is not noticeable to the ear.
This method reduces file size without any loss in quality. However, the resulting files are typically two to three times larger than files produced by lossy compression.
Image format describes how data related to the image will be stored. Data can be stored in a compressed, uncompressed or vector format. Each image format has its own advantages and disadvantages.
TIFF(.tif, .tiff)
Tagged Image File Format (TIFF) stores image data without any
compression or data loss. This allows the image to have high
quality, but the size of the image is also large, which makes it
good for professional printing.
JPEG (.jpg, .jpeg)
Joint Photographic Experts Group (JPEG) is a “lossy” format
meaning that the image is compressed to reduce the file size. The
compression does create a loss in quality but this loss is
generally not noticeable. JPEG files are very common on the
Internet and JPEG is a popular format for digital cameras - making
it ideal for web use and non-professional prints.
GIF (.gif)
Graphics Interchange Format (GIF) files are used for web
graphics. They can be animated and are limited to only 256 colors.
They can allow for transparency. GIF files are typically small in
size and are portable.
PNG (.png)
Portable Network Graphics (PNG) files are a lossless image format.
It was designed to replace the GIF format, as GIF supports only
256 colors, unlike PNG, which supports 16 million colors.
Bitmap (.bmp)
The Bitmap (BMP) image file format was developed by Microsoft for
Windows. It is similar to TIFF in that it is lossless and
uncompressed. Because BMP is a proprietary format, it is generally
recommended to use TIFF files instead.
EPS (.eps)
Encapsulated PostScript (EPS) file is a common vector file type.
EPS files can be opened in applications such as Adobe Illustrator
or CorelDRAW.
RAW Image Files (.raw, .cr2, .nef, .orf, .sr2)
These Files are unprocessed, created by a camera or scanner. Many
digital SLR cameras can shoot in RAW, whether it be a .raw, .cr2,
or .nef. These images are the equivalent of a digital negative,
meaning that they hold a lot of image information. These images
need to be processed in an editor such as Adobe Photoshop or
Lightroom. It saves metadata and is used for photography.
A video format is the container that stores audio, video, subtitles and any other metadata. A codec encodes and decodes multimedia data such as audio and video. When creating a video, a video codec encodes and compresses the video while the audio codec does the same with sound.
MP4
MPEG-4 Part 14 or MP4 is one of the earliest digital video file
formats introduced in 2001. Most digital platforms and devices
support MP4. An MP4 format can store audio files, video files,
still images, and text. MP4 provides high quality video while
maintaining relatively small file sizes.
MOV
MOV was designed by Apple to support the QuickTime player. MOV
files contain videos, audio, subtitles, timecodes and other media
types. Since it is a very high-quality video format, MOV files
take significantly more memory space on a computer.
WMV
The Windows Media Video (WMV) format was designed by Microsoft
and is widely used in Windows media players. The WMV format
provides small file sizes with better compression than MP4, which
is why it is popular for online video streaming.
AVI
Audio Video Interleave (AVI) works with nearly every web browser
on Windows, Mac, and Linux machines. AVI offers the highest
quality but also the largest file sizes. It is supported by
YouTube and works well for TV viewing.
FLV, F4V, and SWF
They are flash video formats designed for Flash Player, but
they're commonly used to stream video online since they have a
relatively small size which makes them easy to download.
WebM
WebM is an open-source video format developed with the current and
future state of the Internet in mind. WebM is intended for use
with HTML5. The video codecs of WebM require very little computer
power to compress and decompress the files. The aim of this design
is to enable online video streaming on almost any device, such as
tablets, desktops, smartphones, or devices like smart TVs.
Musical Instrument Digital Interface (MIDI) is a standard protocol for the interchange of musical information between musical instruments, synthesizers and computers. MIDI files allow music and sound synthesizers from different manufacturers to communicate with each other by sending messages along cables connected to the devices.
• MIDI does not record analog or digital sound waves. It encodes keyboard functions, including the start of a note, its pitch, length, volume, and musical attributes such as vibrato. As a result, MIDI files take up considerably less space than digitized sound files.
• Since the advent of the General MIDI standard for musical instruments, MIDI has been widely used for music backgrounds in multimedia applications due to its space-saving feature. It is MIDI technology you might be hearing as the latest mobile ring tone or on a thrill ride or attraction at a theme park.
• MIDI's small storage requirement makes it very desirable as a musical sound source for multimedia applications compared to digitizing actual music. For example, a three-minute MIDI file may take only 20 to 30K, whereas a WAV file (digital audio) could consume up to several megabytes depending on sound quality.
• Creating your own original score can be one of the most creative and rewarding aspects of building a multimedia project, and MIDI (Musical Instrument Digital Interface) is the quickest, easiest and most flexible tool for this task.
• The process of creating MIDI music is quite different from digitizing existing audio. To make MIDI audio, you will need the following tools,
• MIDI is preferred over digital audio in the following circumstances,
• Digital audio is preferred over MIDI in the following circumstances,
• MIDI is analogous to structured or vector graphics, while digitized audio is analogous to bitmapped images.
• MIDI is device dependent while digitized audio is device independent.
• MIDI files are much smaller than digitized audio.
• MIDI files sound better than digital audio files when played on a high-quality MIDI device.
• With MIDI, it is difficult to playback spoken dialog, while digitized audio can do so with ease.
• MIDI does not have consistent playback quality while digital audio provides consistent playback quality.
• Working with MIDI requires some knowledge of music theory, while digital audio does not have this requirement.
• The use-case model is a model that shows how users interact with the system in order to solve a problem. As such, the use-case model defines the user's objectives, the interactions between the system and the user, and the system behavior required to meet these objectives.
• We use a use-case diagram to graphically portray a subset of the model in order to make the communication simpler.
• The use-case model acts as a unifying thread throughout system development. It is used as the main specification of the system's functional requirements, as the basis for analysis and design, as the basis for user documentation, as the basis for defining test cases, and as an input to iteration planning.
Actor: Usually, actors are people involved with the system, defined on the basis of their roles. An actor can be anything, such as a human or another external system.
Use Case: The use case defines how actors use a system to accomplish a specific objective. The use cases are generally introduced by the user to meet the objectives of the activities and variants involved in the achievement of the goal.
Associations: Associations are another component of the basic model. It is used to define the associations among actors and use cases they contribute in. This association is called communicates-association.
Subject: The subject component is used to represent the boundary of the system of interest.
Use-Case Package: We use this model component to structure the use-case model in order to simplify analysis, planning, navigation, and communication.
Generalizations: A generalization is a relationship between actors that supports the re-use of common properties.
Dependencies: In UML, various types of dependencies are defined between use cases, in particular <<include>> and <<extend>>. We use an <<include>> dependency to incorporate shared behavior from an included use case into a base use case. We use an <<extend>> dependency to incorporate optional behavior from an extending use case into a base use case.
• The below figure shows a use-case diagram from an Automated Teller Machine (ATM) use-case model.
• This diagram shows the subject (ATM), four actors (Bank Customer, Bank, Cashier and Maintenance Person), five use cases (Withdraw Cash, Transfer Funds, Deposit Funds, Refill Machine and Validate User), three <<include>> dependencies, and the associations between the performing actors and the use cases.
The object-oriented design process consists of following activities:
1) Apply design axioms to design classes, their attributes, methods, associations, structures & protocols.
2) Design the access layer
3) Design the view layer
4) Iterate and refine the whole design. Reapply the design axioms and repeat the preceding steps.
• An axiom is a fundamental truth that is always observed to be valid and for which there is no counterexample or exception. Axioms cannot be proven or derived.
• Suh's design axioms applied to OOD:
Axiom 1 (The Independence Axiom):
This axiom maintains the independence of components. It states
that, during the design process, as we go from requirements and
use cases to a system component, each component must satisfy its
requirement without affecting other requirements.
Axiom 2 (The Information Axiom):
This axiom is concerned with simplicity. The goal is to minimize
the information content of the design. In an object-oriented
system, to minimize complexity, use inheritance and the system's
built-in classes, and add as little as possible to what is already
there.
The process of designing view layer classes is divided into four major activities:
• The main goal of this level is to identify classes that interact with human actors by analyzing the use cases developed in the analysis phase.
• The view layer macro process consists of two steps:
• The view layer micro process consists of two steps:
Rule 1: Making the interface simple. This rule is an application of Corollary 2 (single purpose).
Rule 2: Making the interface transparent and natural. This rule is an application of Corollary 4.
Rule 3: Allowing users to be in control of the software. This rule is an application of Corollary 1. Some of the ways to put users in control are:
• The view layer interface can also be called the User Interface (UI).
• The main goal of the UI is to display and obtain needed information in an accessible, efficient manner. A well-designed UI has visual appeal that motivates users to use the application.
• The user interface can employ one or more windows. Windows are commonly used for the following purposes,
• The main idea behind creating an access layer is to create a set of classes that know how to communicate with the place(s) where the data actually reside.
• The access classes must be able to translate any data-related requests from the business layer into the appropriate protocol for data access.
• The access layer performs two major tasks,
• The process of creating an access class for the business classes is as follows:
• Maximize cohesiveness among objects and software components in order to improve coupling, because only a minimal amount of essential information should be passed between components.
• Abstraction leads to simplicity and straight- forwardness and, at the same time, increases class versatility (flexibility, resourcefulness, usefulness).
• If it looks messy (confused, disordered), then it's probably a bad design.
• If it is too complex, then it's probably a bad design.
• If it is too big, then it's probably a bad design.
• If people don't like it, then it's probably a bad design.
• If it doesn't work, then it's probably a bad design.
• Apply design axioms to avoid common design problems and pitfalls.
• It is much better to have a large set of simple classes than a few large, complex classes.
• Rethink the class definition based on experience gained.
• Quality Assurance is defined as a procedure to ensure the quality of software products or services provided to the customers by an organization. Quality assurance focuses on improving the software development process and making it efficient and effective as per the quality standards defined for software products. Quality Assurance is popularly known as QA Testing.
• Quality assurance testing can be divided into two major categories: error-based testing and scenario-based testing.
• The below tests can also be performed to ensure the software quality & reliability.
• Black Box Testing is a software testing method in which the functionalities of software applications are tested without having knowledge of internal code structure, implementation details and internal paths. Black Box Testing mainly focuses on input and output of software applications and it is entirely based on software requirements and specifications. It is also known as Behavioral Testing.
• It is used for validation. In this, we ignore the internal working mechanism and focus on what the output is.
• White Box Testing is a software testing technique in which the internal structure, design and coding of software are tested to verify the flow of input and output and to improve design, usability and security. In white box testing, code is visible to testers, so it is also called clear box testing, open box testing, transparent box testing, code-based testing and glass box testing.
• It is used for verification. In this, we focus on the internal mechanism, i.e. how the output is achieved.
Unit Testing is a type of software testing where individual units or components of a software are tested. The purpose is to validate that each unit of the software code performs as expected. Unit tests isolate a section of code and verify its correctness. A unit may be an individual function, method, procedure, module, or object. Unit testing is a white-box testing technique that is usually performed by the developers during the development (coding) phase of an application.
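A minimal unit-test sketch using Python's built-in unittest module; the apply_discount function is a made-up unit under test, not something from the text:

import unittest

def apply_discount(price, percent):
    """Unit under test: returns the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()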
Integration Testing is a type of testing where software modules are integrated logically and tested as a group. A typical software project consists of multiple software modules, coded by different programmers. The purpose of this level of testing is to expose defects in the interaction between these software modules when they are integrated.
Top-down integration testing is an incremental integration approach in which the higher-level modules are tested first; after the higher-level modules, the lower-level modules are tested, and the modules are then integrated accordingly. Here the higher-level module refers to the main module and the lower-level modules refer to submodules. This approach uses stubs, which simulate a submodule: if the invoked submodule is not yet developed, the stub works as a temporary replacement.
Bottom-up integration testing is another approach to integration testing. In this bottom-up approach the lower-level modules are tested first; after the lower-level modules, the higher-level modules are tested, and the modules are then integrated accordingly. Here the lower-level modules refer to submodules and the higher-level modules refer to the main modules. This approach uses test drivers, which initiate and pass the required data from a higher-level module to a lower-level module when required.
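A small sketch (Python; the module names are hypothetical) of how a stub stands in for a missing lower-level module in top-down integration, and how a driver stands in for a missing higher-level module in bottom-up integration:

# Top-down: the high-level module is real, the submodule is not yet written,
# so a stub returns a fixed, predictable answer in its place.
def tax_service_stub(amount):
    return 0.0                      # temporary replacement for the real tax module

def checkout(amount, tax_service=tax_service_stub):   # high-level module under test
    return amount + tax_service(amount)

assert checkout(100.0) == 100.0     # exercises checkout before the tax module exists

# Bottom-up: the low-level module is real, and a simple test driver
# initiates the call and passes the required data down to it.
def real_tax_service(amount):       # low-level module already implemented
    return amount * 0.18

def driver():                       # test driver standing in for the missing checkout
    assert abs(real_tax_service(100.0) - 18.0) < 1e-9

driver()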
• Usability is composed of effectiveness, efficiency, and satisfaction with which a specified set of users can achieve a specified set of tasks in particular environments. To achieve better usability we should,
• Usability testing measures the ease of use as well as the degree of comfort and satisfaction users have with the software. Products with poor usability can be difficult to learn, complicated to operate, misused or not used at all.
• Usability is one of the most crucial factors in the design and development of a product, especially the user interface. Therefore, usability testing must be a key part of the UI design process.
• Usability test cases begin with the identification of use cases that can specify the target audience, tasks, and test goals.
• The usability testing should include all of a software's components.
• Usability testing need not be very expensive or elaborate.
• All tests need not involve many subjects. Typically, quick, iterative tests with a small, well-targeted sample of 6-10 participants can identify 80-90 percent of most design problems.
• The test participants should be novices or at an intermediate level.
• User satisfaction testing is the process of quantifying the usability test with some measurable attributes of the test, such as functionality, cost, reliability, intuitive UI or ease of use.
• The format of every user satisfaction test is basically the same, with different contents for each project.
• Work with the users or clients to find out what attributes should be included in the test. Ask the users to select a limited number (5 to 10) of attributes by which the final product can be evaluated.
• For example, users might select the following attributes for a customer tracking system,
• Ask the users to rate each attribute with a score from 1 to 10.
• By analyzing the scores given for all attributes by all users, we can decide what needs to be changed in our design.
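A tiny worked sketch (Python; the attributes and scores are invented) of how the collected ratings might be summarised to see which attribute needs design attention:

# Hypothetical ratings: each user scores each selected attribute from 1 to 10.
ratings = {
    "ease of use":   [8, 7, 9, 6],
    "functionality": [9, 8, 9, 9],
    "reliability":   [5, 4, 6, 5],
}

for attribute, scores in ratings.items():
    average = sum(scores) / len(scores)
    print(f"{attribute}: {average:.1f}")
# The lowest average (reliability here) points to what should change in the design.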
A Test Plan is a detailed document that describes the test strategy, objectives, schedule, estimation, deliverables, and resources required to perform testing for a software product. Test Plan helps us determine the effort needed to validate the quality of the application under test. The test plan serves as a blueprint to conduct software testing activities as a defined process, which is minutely monitored and controlled by the test manager.
• A test case is a document which has a set of conditions or actions that are performed on the software application in order to verify the expected functionality of the feature.
• They describe a specific idea that is to be tested, without detailing the exact steps to be taken or data to be used.
• They give flexibility to the tester to decide how they want to execute the test.
• For example, in a test case you might document something like 'Test if coupons can be applied on the actual price'. This doesn't mention how to apply the coupons or whether there are multiple ways to apply them. It also doesn't mention whether the tester uses a link to apply a discount, enters a code, or has customer service apply it.
An efficient test case design technique is necessary to improve the quality of the software testing process. The test case design techniques are broadly classified into three major categories:
The impacts are as follows
Reusability of tests: Marick notes that the simpler a test is, the more likely it is to be reusable in subclasses. Simple tests tend to find only the faults they target; complex tests find those faults and may also stumble across others.
Impact of inheritance on testing: If designers do not follow OOD guidelines, especially if testing is not done incrementally, the result will be objects that are extremely hard to debug and maintain.
Similarly, the concept of inheritance opens various issues; e.g., if changes are made to a parent class or superclass in a larger system, it will be difficult to test the subclasses individually and isolate an error to one class.
• Object-oriented development offers a different model from the traditional software development approach, which is based on functions and procedures. The idea is to develop software by building self-contained modules or objects that can be easily replaced, modified and reused.
• In an object-oriented environment, software is a collection of discrete objects that encapsulate their data as well as the functionality to model real-world objects. Each object has attributes (data) and methods (functions). Objects are grouped into classes, and each object is responsible for itself; a chart object, for example, is responsible for things like maintaining its data and labels and even for drawing itself.
• The basic software development life cycle consists of analysis, design, implementation, testing and refinement. Its main aim is to transform users' needs into a software solution.
• The development is a process of change, refinement, transformation or addition to the existing product. The software development process can be viewed as a series of transformations, where the output of one transformation becomes input of the subsequent transformation.
• Transformation 1 (analysis) translates the users' needs into system requirements & responsibilities.
• Transformation 2 (design) begins with a problem statement and ends with a detailed design that can be transformed into an operational system. It includes the bulk of the s/w development activity.
• Transformation 3 (implementation) refines the detailed design into the system deployment that will satisfy the users' needs. It represents embedding the s/w product within its operational environment.
• An example of a s/w development process is the waterfall approach, which can be stated as below
• Allows higher level of abstraction: Object oriented approach supports abstraction at the object level. Since objects encapsulate both data (attributes) & functions (methods), they work at a higher level of abstraction. This makes designing, coding, testing & maintaining the system much simpler.
• Provides seamless transition among different phases of software development: The object-oriented approach essentially uses the same language to talk about analysis, design, programming and database design. This seamless approach reduces the level of complexity and redundancy and makes for clearer, more robust system development.
• Encourages good development techniques: In a properly designed system, the routines and attributes within a class are held together tightly; the classes are grouped into subsystems but remain independent. Therefore, changing one class has little or no impact on other classes, and so the impact of change is minimized.
• Promotes reusability: Objects are reusable because they are modeled directly out of a real-world problem domain. Classes are designed with reuse as a constant background goal. All the previous functionality remains and can be reused without change.
The goal of a feasibility study is to determine whether a proposed project is worth pursuing. The study examines two fundamental categories: costs and potential benefits. A feasibility study should provide management with enough information to decide on the following,
The feasibility study is divided into costs and benefits.
Costs are divided into up-front or one-time costs and ongoing costs.
Almost all projects will entail similar up-front costs. The organization will often have to purchase additional hardware, software and communication equipment. These costs can be estimated with the help of vendors.
Once the project is completed and the system installed, costs will arise from several areas,
Benefits are divided into cost savings, increased value and strategic advantages
Database testing checks the integrity and consistency of data by verifying the schema, tables, triggers, etc., of the application's database that is being tested. In Database testing, we create complex queries to perform the load or stress test on the database and verify the database's responsiveness.
An issue in the database might cause a crash or leakage of data, and we are aware of the importance of protecting the privacy of users' data. So, it is crucial to perform database testing to ensure that data integrity, consistency, etc. are maintained.
By performing database testing we can guarantee the security and reliability of an application, as it exposes vulnerabilities in the database.
Consider an application that captures the day-to-day transaction details for users and stores the details in the database. From a database testing point of view, the following checks should be performed,
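For illustration, a hedged sketch (Python with SQLite; the tables, columns and data are hypothetical) of the kind of integrity and consistency checks such testing performs:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users        (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE transactions (id INTEGER PRIMARY KEY, user_id INTEGER, amount REAL);
    INSERT INTO users        VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO transactions VALUES (10, 1, 250.0), (11, 2, 99.5);
""")

# Check 1: every transaction must reference an existing user (referential integrity).
orphans = conn.execute("""
    SELECT COUNT(*) FROM transactions t
    LEFT JOIN users u ON u.id = t.user_id
    WHERE u.id IS NULL
""").fetchone()[0]
assert orphans == 0

# Check 2: amounts must obey the business rule (no negative values).
negatives = conn.execute(
    "SELECT COUNT(*) FROM transactions WHERE amount < 0"
).fetchone()[0]
assert negatives == 0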
Client/server describes the relationship between two computer programs in which one program, the client, makes a service request from another program, the server, which fulfills the request.
In a network, the client/server model provides a convenient way to interconnect programs that are distributed efficiently across different locations. The client/server model has become one of the central ideas of network computing.
Client-server computing is a software engineering technique often used within distributed computing that allows two independent processes to exchange information, through a dedicated connection, following an established protocol.
A client-server system mainly consists of two different processes that exchange information through a common server. Unlike a peer-to-peer system, there is one server that stores the information and supplies it only when the client provides specific identification, such as a username.
Functions such as email exchange, web access and database access, are built on the client-server model.
To check a bank account from a computer, a client program in the computer forwards the request to a server program at the bank. That program may in turn forward the request to its own client program, which sends a request to a database server at another bank computer to retrieve the account balance. The balance is returned to the bank data client, which in turn serves it back to the client in the personal computer, which displays the information for us.
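A minimal client/server sketch (Python sockets on localhost; the request and reply formats are invented) showing one program making a service request and another fulfilling it:

import socket
import threading

ready = threading.Event()

def server():
    # The server waits for a request and fulfils it (here, a fake balance lookup).
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 5050))
        srv.listen(1)
        ready.set()                      # signal that the server is accepting
        conn, _ = srv.accept()
        with conn:
            account = conn.recv(1024).decode()
            conn.sendall(f"balance for {account}: 1000.00".encode())

threading.Thread(target=server, daemon=True).start()
ready.wait()

# The client makes the service request and displays the reply.
with socket.socket() as cli:
    cli.connect(("127.0.0.1", 5050))
    cli.sendall(b"ACC-42")
    print(cli.recv(1024).decode())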
The World Wide Web (abbreviated as WWW or W3, commonly known as the Web), is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia, and navigate between them via hyperlinks.
The World Wide Web was designed as a client/server system. The objective was to enable people to share information with their colleagues. The clients run browser software that receives and displays data files. The servers run web server software that answers requests, finds the appropriate files, and sends the required data.
Data Manipulation Language (DML) commands in SQL deal with the manipulation of data records stored within the database tables. They do not deal with changes to database objects and their structure.
Used to query or fetch selected fields or columns from a database table.
SELECT column_name
FROM table_name;
Used to insert new data records or rows in the database table.
INSERT INTO table_name
VALUES (a list of data values);
Used to set the value of a field or column for a particular record to a new value.
UPDATE table_name
SET column_name = value
WHERE
condition;
Used to remove one or more rows from the database table.
DELETE FROM table_name
WHERE condition;
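To make the four commands above concrete, here is a small end-to-end sketch (Python's built-in sqlite3 module; the cars table and its values are invented for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (id INTEGER PRIMARY KEY, name TEXT, price REAL)")

# INSERT: add new rows.
conn.execute("INSERT INTO cars VALUES (1, 'Buick', 30000), (2, 'Cadillac', 45000)")

# UPDATE: change a column value for rows matching the condition.
conn.execute("UPDATE cars SET price = 32000 WHERE name = 'Buick'")

# DELETE: remove rows matching the condition.
conn.execute("DELETE FROM cars WHERE name = 'Cadillac'")

# SELECT: fetch selected columns.
for name, price in conn.execute("SELECT name, price FROM cars"):
    print(name, price)          # Buick 32000.0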
Data Definition Language (DDL) commands in Structured Query Language (SQL) are used to describe/define the database schema. These commands deal with database schema creation and its further modifications.
Used for creating database objects like a database or a database table.
CREATE DATABASE database_name;
CREATE TABLE table_name(
col_name_1 datatype CONSTRAINT,
col_name_2 datatype CONSTRAINT,
.
.
col_name_n datatype CONSTRAINT);
Used for modifying and renaming elements of an existing database table.
ALTER TABLE table_name
ADD (column_name datatype);
ALTER TABLE table_name_1
RENAME TO table_new_name;
ALTER TABLE table_name
DROP column_name;
Used to remove all the records from a database table.
TRUNCATE TABLE table_name;
Used for removing an entire database or a database table.
DROP DATABASE database_name;
DROP TABLE table_name;
DCL is an abbreviation of Data Control Language. It is used to create roles, permissions, and referential integrity, and it is used to control access to the database by securing it.
Used to provide authorization to one or more users to perform an operation or a set of operations on a database object.
GRANT privileges ON object TO user;
Used to withdraw a user's access privileges to a database object that were given with the GRANT command.
REVOKE privileges ON object FROM user;
Transaction Control Language commands are used to manage transactions in the database. Managing transactions simply means managing the changes made by DML-statements. It also allows statements to be grouped together into logical transactions.
Used to permanently save any transaction into the database.
COMMIT;
Used to temporarily save a transaction so that you can roll back to that point whenever necessary.
SAVEPOINT savepoint_name;
Used to restore the database to the last committed state. It is also used with the SAVEPOINT command to jump to a savepoint within a transaction.
ROLLBACK;
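A small hedged sketch (Python's sqlite3 module; the accounts table is hypothetical) of how COMMIT and ROLLBACK control whether DML changes become permanent:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 500.0)")

conn.execute("BEGIN")
conn.execute("UPDATE accounts SET balance = balance - 600 WHERE id = 1")
balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]

if balance < 0:
    conn.execute("ROLLBACK")         # undo the uncommitted change
else:
    conn.execute("COMMIT")           # make the change permanent

print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # 500.0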
• Cluster analysis or clustering is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense
• In other words, "Clustering" is the process of organizing data into meaningful groups and these groups are called clusters.
• Data clustering improves performance by identifying data that is commonly accessed together.
• Only some of the large transaction-oriented database systems support clustering, e.g. Oracle.
Clustering the data improves application speed by reducing the number of disk accesses
Clustering is useful in several exploratory pattern-analysis, grouping, decision-making and machine learning situations including data mining, document retrieval, image segmentation and pattern classification.
• Data partitioning is a technique to improve application performance or security by splitting tables across multiple locations.
• Data partitioning is the process of logically or physically partitioning data into segments that are more easily maintained or accessed.
• Horizontal partitioning is often used to split data so that it can be stored in the location where it will be used the most. It involves putting different rows into different tables.
• Some of the rows (i.e. active data) will be stored in one location and other rows (i.e. inactive data) will be stored in a different location. The active data will be stored on high-speed disk drives.
• The user does not need to know about the split because the DBMS automatically retrieves data from either location.
• Vertical partitioning involves creating tables with fewer columns and using additional tables to store the remaining columns.
• In vertical partitioning, active or frequently used columns of data are stored on a faster drive, while inactive or rarely used columns are moved to cheaper and slower drives.
• Overall performance will improve because the DBMS will be able to retrieve more of the smaller rows.
• Vertical partitioning is useful for limiting the amount of data that need to read into memory.
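A simplified sketch (Python with SQLite; the table names are invented) of the two partitioning styles described above; a view is used so that users need not know about the horizontal split:

import sqlite3

conn = sqlite3.connect(":memory:")

# Horizontal partitioning: same columns, different rows in each location.
conn.execute("CREATE TABLE orders_active  (id INTEGER, customer TEXT, total REAL)")
conn.execute("CREATE TABLE orders_archive (id INTEGER, customer TEXT, total REAL)")

# Vertical partitioning: frequently used columns kept apart from rarely used ones,
# linked by the same key.
conn.execute("CREATE TABLE product_core    (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
conn.execute("CREATE TABLE product_details (id INTEGER PRIMARY KEY, long_description TEXT)")

# A view hides the horizontal split, so queries read from either location transparently.
conn.execute(
    "CREATE VIEW orders AS "
    "SELECT * FROM orders_active UNION ALL SELECT * FROM orders_archive"
)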
The DBMS is preferred over the conventional file processing system due to the following advantages,
Redundancy means repetition or duplication of data. In a DBMS, data redundancy can be controlled or reduced, but it is not removed completely. By controlling the data redundancy, you can save storage space.
The DBMS also has system to maintain data consistency with minimal effort.
Since data in database is stored in tables, any data in the database can be easily retrieved, combined and compared using the query system.
In DBMS, data can be shared by authorized users of the organization. The DBA manages the data and gives rights to users to access the data. Many users can be authorized to access the same set of information simultaneously.
The DBMS provides tools that can be used to develop application programs, thereby reducing the cost and time of developing new applications.
Centralized control and standard procedures can improve data protection & data integrity in DBMS.
A DBMS is evaluated based on the following components,
Database Engine
Data Dictionary
Query Processor
Report Writer
Forms Generator
Application Generator
Communication and Integration
Security and Other Utilities
The database engine is the heart of the DBMS. The engine is responsible for defining, storing and retrieving the data. It affects the performance (speed) and the ability to handle large problems (scalability). It is a stand-alone component that can be purchased and used as an independent software module. Example: Microsoft Jet Engine.
The data dictionary holds the definition of all the tables. A data dictionary is a reserved space within a database which is used to store information about the database itself i.e. design information, user permissions etc.
It is the fundamental component of DBMS. It enables developers and users to store and retrieve data. All database operations can be run through the query language. Query language is processed by the query processor.
Report is a summarization of database data. It extracts information from one or more files and presents the information in a specified format.
Software design is the process to transform the user requirements into some suitable form, which helps the programmer in software coding and implementation.
The software design concept simply means the idea or principle behind the design. It describes how you plan to solve the problem of designing software, the logic, or thinking behind how you will design software.
Testing is the process of executing a program with the aim of finding errors. To make our software perform well it should be error-free. If testing is done successfully it will remove all the errors from the software.
It focuses on the smallest unit of software design. In this, we test an individual unit or group of interrelated units. It is often done by the programmer by using sample input and observing its corresponding outputs.
Example: checking whether an individual function, such as a discount-calculation routine, returns the expected output for sample inputs.
The objective is to take unit tested components and build a program structure that has been dictated by design. Integration testing is testing in which a group of components is combined to produce output.
Integration testing is of four types: (i) Top-down (ii) Bottom-up (iii) Sandwich (iv) Big-Bang
Example: verifying that the login module and the user-profile module work together correctly after they are combined.
Every time a new module is added, it leads to changes in the program. This type of testing makes sure that the whole component works properly even after adding components to the complete program.
Example: re-running existing test cases after a change to confirm that previously working features are unaffected.
This test is done to make sure that the software under testing is ready or stable for further testing. It is called a smoke test because an initial testing pass is done to check whether it catches fire or smoke when first switched on.
Example: checking that the application starts up and its main screens load before detailed testing begins.
This is a type of validation testing. It is a type of acceptance testing which is done before the product is released to customers. It is typically done by QA people.
The beta test is conducted at one or more customer sites by the end-users of the software. This version is released to a limited number of users for testing in a real-time environment.
The software is tested such that it works fine for different operating systems. It is covered under the black box testing technique. In this, we just focus on the required input and output without focusing on the internal working.
In this, we have security testing, recovery testing, stress testing, and performance testing.
In this, we give unfavorable conditions to the system and check how it performs under those conditions.
Example: running the system with an unusually large volume of input data or with very limited memory.
It is designed to test the run-time performance of software within the context of an integrated system. It is used to test the speed and effectiveness of the program. It is also called load testing. In it, we check what the performance of the system is under a given load.
Example: checking whether a page responds within the required time when many users access it simultaneously.
This testing is a combination of various testing techniques that help to verify and validate object-oriented software. This testing is done in the following manner:
Example: testing the methods of each class individually and then testing collaborating objects against their use cases.
Software cost estimates are based on past performance. Cost estimates can be made either (i) top-down or (ii) bottom-up.
The most widely used (top-down) cost estimation techniques are:
Expert judgement relies on the experience, background and business sense of one or more key people in the organization.
Example: An expert might arrive at a cost estimate in the following manner,
This technique (the Delphi technique) was developed at the Rand Corporation in 1948.
This technique can be adapted to software estimation in the following manner,
This family of models was proposed by Boehm in 1981. The models have been widely accepted in practice. In the COCOMO models, the code size S is given in thousands of LOC (KLOC) and effort is measured in person-months. COCOMO is actually a hierarchy of estimation models that address the following areas:
Application composition model: Used during the early stages of software engineering, when prototyping of user interfaces, consideration of software and system interaction, assessment of performance, and technology maturity are paramount.
Early design stage model: Used once requirements have stabilized and basic software architecture has been established.
Post-architecture-stage model: Used during the construction of the software.
Basic COCOMO: This model uses three sets of {a, b} depending only on the complexity of the software.
Intermediate & Detailed COCOMO: In the intermediate COCOMO, a nominal effort estimate is obtained using the power function with three sets of {a, b}, with the coefficient a being slightly different from that of the basic COCOMO.
Then, fifteen cost factors with values ranging from 0.70 to 1.66 are determined. The overall impact factor M is obtained as the product of all the individual factors, and the estimate is obtained by multiplying M by the nominal estimate.
While both the basic and intermediate COCOMOs estimate the software cost at the system level, the detailed COCOMO works on each subsystem separately and has an obvious advantage for large systems that contain non-homogeneous subsystems.
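For reference, a commonly cited form of the COCOMO equations (the coefficient values in the worked line are the standard textbook figures for an organic-mode project and are illustrative only):

% Basic COCOMO: effort E in person-months, development time D in months
E = a \,(\mathrm{KLOC})^{b}, \qquad D = c \, E^{\,d}

% Intermediate COCOMO: the nominal effort is scaled by the product M of the
% fifteen cost-driver ratings
E = a \,(\mathrm{KLOC})^{b} \times M, \qquad M = \prod_{i=1}^{15} \mathrm{EM}_i

% Worked example (organic mode: a = 2.4, b = 1.05, c = 2.5, d = 0.38, size = 32 KLOC)
E = 2.4 \times 32^{1.05} \approx 91 \ \text{person-months}, \qquad
D = 2.5 \times 91^{0.38} \approx 14 \ \text{months}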
The Software Requirement Specification (SRS) format, as the name suggests, is a complete specification and description of the requirements of the software that must be fulfilled for the successful development of the software system. These requirements can be functional as well as non-functional, depending upon the type of requirement. Interaction between the customers and the contractor is necessary to fully understand the needs of the customers.
Depending upon the information gathered after this interaction, the SRS is developed; it describes the requirements of the software and may include the changes and modifications needed to increase the quality of the product and to satisfy the customer's demands.
At first, the main aim of the document and why it is necessary, i.e. its purpose, is explained and described.
In this, the overall working and main objective of the document and what value it will provide to the customer are described and explained. It also includes a description of the development cost and the time required.
In this, a description of the product is given. It is simply a summary or overall review of the product.
In this, the general functions of the product, including user objectives, user characteristics, features, benefits and the reasons for its importance, are mentioned. It also describes the features of the user community.
In this, the possible outcomes of the software system, including the effects of the operation of the program, are fully explained. All functional requirements, which may include calculations, data processing, etc., are placed in a ranked order.
In this, the software interfaces, i.e. how the software program communicates with other programs or users, whether in the form of a language, code, or message, are fully described and explained. Examples can be shared memory, data streams, etc.
In this, how the software system performs the desired functions under specific conditions is explained. It also specifies the required time, required memory, maximum error rate, etc.
In this, constraints, which simply means limitations or restrictions, are specified and explained for the design team. Examples may include the use of a particular algorithm, hardware and software limitations, etc.
In this, the non-functional attributes required by the software system for better performance are explained. Examples may include security, portability, reliability, reusability, application compatibility, data integrity, scalability, capacity, etc.
In this, the initial version and budget of the project plan are explained, including the overall time duration and the overall cost required for the development of the project.
In this, additional information like references from where information is gathered, definitions of some specific terms, acronyms, abbreviations, etc. are given and explained.