Taguchi Methods and Quality Costs in Business Operations, Thesis of Production and Operations Management

Operations Management answer sheets

Typology: Thesis

2018/2019

Uploaded on 02/05/2019

samson-h 🇮🇳


Q1)Taguchi Methods

The term 'Taguchi methods' is normally used to cover two related ideas. The first is that, by the use of statistical methods concerned with the analysis of variance, experiments may be constructed which enable identification of the important design factors responsible for degrading product performance. The second (related) concept is that when judging the effectiveness of designs, the degree of degradation or loss is a function of the deviation of any design parameter from its target value.

These ideas arise from development work undertaken by Dr Genichi Taguchi whilst working at the Japanese telecommunications company NTT in the 1950s and 1960s. He attempted to use experimental techniques to achieve both high quality and low-cost design solutions.

He suggested that the design process should be seen as three stages:

systems design;

parameter design; and

tolerance design.

Conventional tolerance limits imply a step function of acceptability: any value inside the limits is treated as equally good, and any value outside them as equally bad. The Taguchi methodology suggests that instead of this implied step function of acceptability, a more realistic function is used based on the square of the deviation from the ideal target, i.e. that customers/users get significantly more dissatisfied as performance varies from the ideal.

This function, the quality loss function, is given by the expression:

L = k ( x - a )²

where

L = the loss to society of a unit of output at value x

a = the ideal state target value, where at a, L = 0

k = a constant
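
The loss function can be sketched directly in code. The target value, tolerance and cost figures below are hypothetical, chosen only to illustrate how k is often estimated in practice (from the cost of a unit at the tolerance limit):

```python
def quality_loss(x, target, k):
    """Taguchi quadratic quality loss: L = k * (x - target)**2."""
    return k * (x - target) ** 2

# Hypothetical calibration: if a deviation of 0.5 mm from the target
# costs $50 to remedy, then k = 50 / 0.5**2 = 200.
k = 50 / 0.5 ** 2

loss_on_target = quality_loss(10.0, 10.0, k)   # zero loss at the target
loss_off_target = quality_loss(10.2, 10.0, k)  # small deviation, small loss
```

Note that because the loss is quadratic, doubling the deviation quadruples the loss, which is exactly the sense in which Taguchi's function differs from a step function of acceptability.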

A common criticism of the Taguchi loss function is that while the form of the loss function may be regarded in most cases as being more realistic than a step function, the practicalities of determining the constant k with any degree of accuracy are formidable. Quoted successful applications of the Taguchi methodology are frequently associated with relatively limited aspects of design, for example single parts, rather than very complex products or services. Some designers and academics also argue that the results of the Taguchi methodology may not always provide better design solutions than those obtained by conventional means.

However, the critics often seem to miss the point that Taguchi methods are not just a statistical application of design of experiments; the methods include the integration of statistical design of experiments into a wider and more powerful engineering process. The true power of the methodology comes from its simplicity of implementation.

The methods are often applied by technicians on the Japanese manufacturing floor to improve their products and processes. The goal is not simply to optimise an arbitrary objective function, which is how Westerners often regard them, but rather to reduce the sensitivity of engineering designs to uncontrollable factors or noise. The objective function used is the signal-to-noise ratio, which is maximized. This moves design targets toward the middle of the design space so that external variation affects behaviour as little as possible, permitting large reductions in both part and assembly tolerances, which are major drivers of manufacturing cost.
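
As a minimal sketch of one common form of this objective, the "nominal-the-best" signal-to-noise ratio rewards factor settings whose response has low scatter around its mean. The two sets of sample measurements below are invented purely for illustration:

```python
import math

def sn_nominal_the_best(values):
    """Taguchi nominal-the-best S/N ratio: 10 * log10(mean^2 / variance).
    Higher values mean the response is less sensitive to noise."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 10 * math.log10(mean ** 2 / variance)

# Two hypothetical factor settings with the same mean response:
tight = [10.0, 10.1, 9.9]   # low scatter  -> higher S/N, preferred
loose = [10.0, 11.0, 9.0]   # high scatter -> lower S/N
```

Maximizing this ratio over candidate factor settings selects the design that is most robust to noise, which is the point the paragraph above makes.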

Q2)Quality costs are the costs associated with preventing, detecting, and remediating product issues related to quality. Quality costs do not involve simply upgrading the perceived value of a product to a higher standard. Instead, quality involves creating and delivering a product that meets the expectations of a customer. Thus, if a customer spends very little for an automobile, he will not expect leather seats and air conditioning - but he will expect the vehicle to run properly. In this case, quality is considered to be a vehicle that functions, rather than a luxury experience.

Quality costs fall into four categories, which are:

Prevention costs. You incur a prevention cost in order to keep a quality problem from occurring. It is the least expensive type of quality cost, and so is highly recommended. Prevention costs can include proper employee training in assembling products and statistical process control (for spotting processes that are beginning to generate defective goods), as well as a robust product design and supplier certification. A focus on prevention tends to reduce preventable scrap costs, because the scrap never occurs.

Appraisal costs. You incur an appraisal cost in order to detect quality problems before defective products reach customers. This is done through a variety of inspections. The least expensive is having production workers inspect both incoming and outgoing parts to and from their workstations, which catches problems faster than other types of inspection. Other appraisal costs include the destruction of goods as part of the testing process.

The information comes essentially in two ways: the knowledge gathered by experts, and actual data. If no data is yet available, the information must come from the judgments made by experts in the area. If the forecast is based solely on judgment and no actual data, we are in the field of qualitative forecasting. If data is available on the subject, a model is used to analyze the data and predict future values. This is called quantitative forecasting. A good example is predicting the sales of a given product in order to replenish stocks accordingly. This can even be done on a daily basis if you use a good forecasting tool for small business.

  1. Perform a Preliminary Analysis

An early analysis of the data may tell you right away if the data is usable or not. It may also reveal patterns or trends that can then be helpful, for example, in choosing the model that best fits it. Another thing that can be done here is to check for redundant data and cut it down, or make some educated assumptions. By reducing the amount of data to analyze, you can greatly simplify the entire process.

  2. Choose the Forecasting Model

Once all the information is collected and treated, you may then choose the model you think will give you the best prediction possible. No single model works best in all situations; it all depends on the availability and nature of the data.

Qualitative Forecasting

As we’ve seen before, we may not even have any historical data, in which case we have to use qualitative forecasting.

Two models that are commonly used in qualitative forecasting are market research and the Delphi method. Market research is performed by asking a large number of people about their willingness to purchase a possible product or service.

The Delphi method consists of gathering forecasts from several different experts in a given area, and then compiling all that information into a single forecast. It relies on the assumption that a collective forecast is more accurate than that of a single person.
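
The full Delphi method is an iterative process with feedback rounds between experts, but its final compilation step can be sketched very simply. The forecast figures below are hypothetical; the median is used here because it is robust to a single extreme opinion:

```python
from statistics import median

def delphi_aggregate(expert_forecasts):
    """Combine independent expert forecasts into a single figure.
    The median resists distortion by one outlying expert."""
    return median(expert_forecasts)

# Four hypothetical expert sales forecasts, one of them an outlier:
combined = delphi_aggregate([100, 120, 90, 300])
```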

Quantitative Forecasting

If sufficient data is available, the human factor can be removed from the equation and a raw data analysis can be performed to predict future values. Many mathematical models exist for making these predictions, including

regression models, exponential smoothing models, Box-Jenkins ARIMA models and others.

Some forecasting tools for small business, like DataQlick, use an Exponential Moving Average Calculation model to predict product sales.
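
An exponential moving average of the kind mentioned here can be sketched in a few lines. The smoothing factor and the weekly sales figures below are hypothetical, chosen only to illustrate the calculation:

```python
def exponential_moving_average(series, alpha):
    """Exponentially weighted average of a series: each new observation
    gets weight alpha, the running average gets weight (1 - alpha).
    The final value can serve as the next-period forecast."""
    ema = series[0]
    for value in series[1:]:
        ema = alpha * value + (1 - alpha) * ema
    return ema

weekly_sales = [120, 130, 125, 140, 135]   # hypothetical unit sales
forecast = exponential_moving_average(weekly_sales, alpha=0.3)
```

A higher alpha makes the forecast react faster to recent sales; a lower alpha smooths out short-term noise.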

  3. Data Analysis

This step is simple. After choosing a suitable model, run the data through it.

  4. Verify Model Performance

Once actual results become available, it is very important to compare your forecast to the actual data. This allows you to evaluate the accuracy not only of the model, but of the entire process, and to adjust each step accordingly. Hopefully, if you use a good forecasting tool for small business, there won't be much tweaking needed!
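
Two common ways to score a forecast against the actual data are the mean absolute error and the mean absolute percentage error. The actual and forecast figures below are invented for illustration:

```python
def mean_absolute_error(actual, forecast):
    """Average size of the forecast errors, in the units of the data."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mean_absolute_percentage_error(actual, forecast):
    """Average error expressed as a percentage of the actual values."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actual, forecast)) / len(actual)

actual = [100, 200, 150]    # hypothetical observed sales
forecast = [110, 190, 150]  # what the model predicted
```

Tracking these measures over successive forecasting cycles shows whether changes to the model or to the process are actually improving accuracy.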

Q4)Correlation is described as the analysis which lets us know the association, or the absence of a relationship, between two variables 'x' and 'y'. On the other hand, regression analysis predicts the value of the dependent variable based on the known value of the independent variable, assuming an average mathematical relationship between two or more variables.

The difference between correlation and regression is one of the most commonly asked questions in interviews, and many people find the distinction ambiguous, so it is worth reading this section carefully to understand it clearly.

Key Differences Between Correlation and Regression

The points given below explain the difference between correlation and regression in detail:

A statistical measure which determines the co-relationship or association of two quantities is known as Correlation. Regression describes how an independent variable is numerically related to the dependent variable.

Correlation is used to represent the linear relationship between two variables. On the contrary, regression is used to fit the best line and estimate one variable on the basis of another variable.

In correlation, there is no distinction between dependent and independent variables, i.e. the correlation between x and y is the same as the correlation between y and x. Conversely, the regression of y on x is different from the regression of x on y.
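
These points can be illustrated numerically. The data below are made up purely to show the symmetry of correlation and the asymmetry of regression:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient; symmetric in its arguments."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def regression_slope(xs, ys):
    """Least-squares slope of the regression of ys on xs; not symmetric."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    return cov / var_x

x = [1, 2, 3, 4]
y = [2, 4, 5, 8]
# pearson_r(x, y) equals pearson_r(y, x), but
# regression_slope(x, y) differs from regression_slope(y, x).
```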

The proliferation of audiovisual communications technologies, including sound, video, lighting, display and projection systems, is evident in every sector of society: in business, education, government, the military, healthcare, retail environments, worship, sports and entertainment, hospitality, restaurants, and museums. The application of audiovisual systems is found in collaborative conferencing (which includes video-conferencing, audio-conferencing, web-conferencing and data-conferencing); presentation rooms, auditoriums, and lecture halls; command and control centers; digital signage, and more. Concerts and corporate events are among the most obvious venues where audiovisual equipment is used in a staged environment. Providers of this type of service are known as rental and staging companies, although they may also be served by an in-house technology team (e.g., in a hotel or conference center).

According to a 2006 market forecast study by InfoComm International, a leading trade association representing the audiovisual industry, 2006 was the fourth consecutive year that significant growth was projected for the industry. Revenue for surveyed North American companies was expected to grow by 40% in 2006, and by 10.7% for European audiovisual companies. The single biggest factor in this increase is the increased demand for networked audiovisual products due to the integration of audiovisual and IT technology. The two leading markets for AV equipment in North America and Europe continue to be business/IT and education, especially as conference room technologies become more advanced.

Q8)Definition: Test marketing is a tool used by companies to check the viability of a new product or marketing campaign before it is launched in the market on a large scale.

The market test is generally carried out to ascertain the probable market success in terms of the new product's performance, the level of acceptance of the product, customer satisfaction, and the efficiency of the marketing campaign.

Through test marketing, a marketer may ascertain the success ratio of the new product and the marketing campaign, and can design the marketing mix (viz. product, price, place, promotion) well before its launch.

The test marketing of consumer goods and industrial goods varies.

To ascertain these variables, the following tests are conducted:

Sales-Wave Research: Under this test, consumers are offered the product again and again, free of cost. This is done to determine their willingness to use the product every time it is offered.

Simulated Test Marketing: Under this test, 30-40 customers are selected and invited to a store where they can buy anything. The new products are placed alongside old or competitors' products, and consumers' preferences are ascertained from their selection of products.

If the new product is not chosen by them, free samples are given to the customers, who are contacted by telephone after some weeks and asked about their product experience.

Controlled Test Marketing: Under this test, the company selects certain stores in different geographic areas and asks them to stock its new product in return for a fee. The company controls the shelf position, displays, point-of-purchase promotions and pricing.

Test Markets: Under this, the firm chooses representative cities where a full-fledged launch of the new product is carried out, from the promotion campaign through to the ultimate sales. Once it is successful, the firm goes for the national launch.