Correct Mind Set to Problem Solving


One of the most powerful ingredients for achieving consistently good quality is a positive organizational mindset, starting from top management.  Many quality, Six Sigma, and lean tools have been created over the past century to assist in problem solving and help companies achieve their quality goals.  To apply lean or Six Sigma tools successfully, produce consistently good products, and bring value to the customer, the whole organization must have a positive mindset and a disciplined system driven from the top down.

The organization's leaders play a critical role in cascading a positive mindset to team members from the top down, following the law of gravity.  The management layers of an organization must have a clear awareness of problems, potential problems, and risks.  Only with awareness of a problem can an action plan be established to address it, followed by carefully orchestrated execution of the plan to resolve it.  A problem that gets management attention gets resolved faster than a problem hidden on the shop floor.

Dr. Deming's (1900-1993) 14 points for total quality management require top-leader involvement in implementation.  In a lean enterprise, top management must not be too far removed from the day-to-day problems.  Leaders in an organization are supposed to do a regular gemba walk on the production floor, or wherever inputs are transformed into the final product.  A gemba walk is not just about walking the shop floor; it must be designed with the objective of eliminating waste, including quality-related problems.

Many organizations have hired the best people, which certainly helps bring an organization's quality goals to reality; however, the best talent cannot function alone.  The best people can provide creative solutions based on their knowledge and skill set, but they need a team to mobilize the planned solution before problems get resolved and goals get realized.  The biggest authority to mobilize a team to execute a plan is top management.  Before the mobilization of the plan can happen, the mobilization of minds has to happen.
Only with a positive, open mindset will the organization be open to learning new quality methodologies and applying what has been learned.  Applying quality tools will be painful initially, as it means change, and change is painful to the average human being.  Accepting change requires a positive mindset.  The reward that comes after change is great; unfortunately, most people hope for the reward without being willing to pay the price of change.



Validate an Improvement in Key Quality Characteristic


Let's say we have put in a lot of effort to reduce the variation in the process inputs; the next question is how we know whether we have really improved the key quality characteristics.

To begin with, we must identify the key quality characteristic to the consumer and determine whether it is measurable.  We cannot manage what we cannot measure (W. E. Deming).  Do not jump into improving the process before collecting the current performance of the key characteristic.  Once the current performance is known, such as the yield rate of a quality attribute or the process capability of a quality variable, the improvement effort can kick off.  This is followed by another round of data collection to gather the quality performance index after improvement.

In relation to the question above, we need to compare data from before and after the improvement plan to validate whether there is real improvement.  This means we need to check for a real shift in the process:
  1. Shift in process center
  2. Shift in process variation

We test the hypothesis that process output quality has improved using statistical hypothesis testing, checking whether the null hypothesis (Ho) or the alternate hypothesis (Ha) is supported.  Ho usually states that there is no change in the status quo, i.e., NO change in process output quality, while Ha states that there is a change.  This is also known as the comparative statistics method.  With this technique we can compare the following:

  1. The process center (mean/median) of variable data before and after improvement
  2. The variation of variable data before and after improvement (e.g., F-test or ANOVA)
  3. The mean of attribute data before and after improvement
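As a minimal sketch (hypothetical illustrative data, not from the article), the first two comparisons above could be run in Python with SciPy: Welch's t-test for a shift in the mean and Levene's test for a shift in variation.

```python
import numpy as np
from scipy import stats

# Hypothetical before/after measurements of a key quality characteristic
before = np.array([10.2, 10.5, 9.8, 10.1, 10.4, 10.0, 10.3, 9.9, 10.2, 10.1])
after = np.array([10.0, 10.1, 9.9, 10.0, 10.1, 10.0, 9.9, 10.1, 10.0, 10.0])

# Ho: no change in process center; Welch's t-test does not assume equal variances
t_stat, p_mean = stats.ttest_ind(before, after, equal_var=False)

# Ho: no change in variation; Levene's test is robust to mild non-normality
w_stat, p_var = stats.levene(before, after)

alpha = 0.05
print(f"Mean shift:      p = {p_mean:.4f} ({'reject Ho' if p_mean < alpha else 'fail to reject Ho'})")
print(f"Variation shift: p = {p_var:.4f} ({'reject Ho' if p_var < alpha else 'fail to reject Ho'})")
```

A small p-value rejects Ho and supports a real shift; with marginal p-values, collect more samples rather than force a conclusion.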

Although statistical software has made this technique simple, with a few button presses producing the analyzed results, it is unfortunately not widely used, or it is not deployed correctly.  This could be due to:

  1. The sample size is not sufficient to detect a shift in the process
  2. The sample does NOT represent the actual population
  3. The data measurement process is not validated or corrected
  4. The wrong test is used
  5. The results are not interpreted correctly
  6. The data is not checked for normality
  7. The concept of the confidence interval, which is used to estimate the population's process center and variation, is not understood


Statistical comparative methods are very important in decision making, for example before making a huge investment to change a process.  They are the technique to check whether a real improvement has been made, and coupled with statistical process control, they can also determine whether the improvement is sustainable.  This is especially critical in high-volume mass production environments, where it is not possible to measure every single output and yet we have to ensure that every single piece in the whole population is of consistently good quality.


How to Determine Which Process Inputs Have an Impact on the Process Output

If you have been following the articles on this website, you will already know that there are 6 types of process inputs, Xs (man, machine, method, material, measure, and environment), which can impact the process output(s), Y(s).  However, the 6 process inputs do not all impact the process output Y with the same magnitude.



Some process inputs may have very minimal impact, while other process inputs have more influence on the process output.  In turn, each process input has its own numerous factors that can impact the output parameters that are key to customers.

Process input: Examples of factors
  Man: Operator training, experience, skills, type of training program, skill of the trainer, management direction, etc.
  Machine: Machine brand, settings of various parameters, level of automation
  Method: Work instruction clarity, creator of the work instruction, process steps and layout, skill set of the engineer
  Material: Different vendors, different batches, raw material, manufacturing variation contributed by 5M 1E
  Measure: Measurement instrument, measurement method
  Environment: Humidity, temperature, pollution, seasons, etc.

In order to achieve consistently good quality products as perceived by the customer, manufacturers must be able to find the vital few factors from each process input that have a major effect on the process output, and then control the settings of those factors.

The best methodology to determine which process input factors have the most influence on the process outputs that are key to the customer is design of experiments (DOE).  DOE is the systematic planning and conducting of a series of experimental runs in which controlled changes are made to the inputs in order to observe and identify causes for changes in the outputs of a system or process.



The DOE methodology consists of the following steps, using a good statistical software package such as Minitab or JMP:
  1. Define: Understand the quality-related problem and identify the key output parameters to the customer, known as responses, Y.  Use skill and experience to map them to the potential input factors, X.
  2. Design: Select the process inputs X, set each to a high and a low setting (level), and design the experiment according to the number of factors and setting levels.
  3. Conduct the 1st experiment: Verify the measurement system for the process output measurement (refer to this article on measurement system verification: http://www.360qualitymanagement.com/2017/09/importance-of-performing-gage.html).  Run the experiment according to the design and collect data on the process output.
  4. Analyze: Develop a prediction model to estimate the effect of each process input factor X on the process output Y.  Identify the process input factors that have a significant effect on Y.
  5. Optimize through a 2nd or further experiment: Fine-tune the model to optimize the settings of the process inputs X to get the best result for the process output Y through prediction modeling.
  6. Validate through another series of experiments: Validate the optimum settings, measure the process output Y, and check the actual results against the predicted results.
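To illustrate the Analyze step, here is a minimal sketch (hypothetical data, plain NumPy rather than Minitab or JMP) of estimating main and interaction effects from a replicated 2x2 full factorial design with two coded input factors, A and B:

```python
import numpy as np

# Design matrix in coded units: -1 = low level, +1 = high level.
# Each of the 4 factor combinations is replicated twice (illustrative numbers).
A = np.array([-1, +1, -1, +1, -1, +1, -1, +1])
B = np.array([-1, -1, +1, +1, -1, -1, +1, +1])
Y = np.array([20.1, 25.3, 21.8, 31.2, 19.7, 24.9, 22.2, 30.8])  # measured response

def effect(x, y):
    """Average response at the high level minus average at the low level."""
    return y[x == +1].mean() - y[x == -1].mean()

print(f"Main effect A:  {effect(A, Y):+.2f}")
print(f"Main effect B:  {effect(B, Y):+.2f}")
print(f"Interaction AB: {effect(A * B, Y):+.2f}")
```

A large interaction effect relative to the main effects is exactly the situation pitfall 8 below warns about: the best setting of A then depends on the setting of B.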
So far I have not met a real DOE expert in the computer component manufacturing industry that I have dealt with, and many processes have never been able to optimize their output Y due to a lack of expertise to truly understand and conduct a proper DOE.  I have seen many pitfalls in design of experiments in the following areas:

  1. Trial-and-error or wild-guess methods mistaken for the DOE method
  2. Inability to measure the process output correctly, with measurement error associated with the measurement process
  3. Failure to separate controllable and uncontrollable factors
  4. No real modeling done, so it is impossible to conclude which parameters matter
  5. No statistical software used for prediction modeling
  6. Use of the wrong prediction model; for example, a process output with a binomial distribution (yield rate, pass or fail) should use binary logistic regression
  7. A sample size too small to estimate the contribution of experimental error in the experiment
  8. Lack of understanding of how to analyze interaction effects between factors
  9. Jumping to conclusions after running the 1st experiment, without conducting follow-up experiments such as reduction, optimization, and validation of all the findings
  10. The actual results do not fit the prediction model, and no attempt is made to understand why, for example because the selected factors X do not impact the response Y, or because the impact of uncontrollable factors is greater than that of the controllable factors
Design of experiments is a very powerful tool that enables a manufacturer to understand, optimize, and control the vital few process inputs, X, to obtain a desirable process output, Y.  It requires a systematic approach to define, conduct, and analyze the experiment and its results under the supervision of a DOE expert.

Why Traditional Statistical Process Control Monitoring does not work anymore – Part 2

The International Quality Institute in the US innovated an SPC monitoring technique known as short-run SPC to address the needs of modern production lines with low volume and high mix.  With this technique, one chart can be used across different models with different process centers and control limits, such as parts A, B, and C shown in the chart below.



The short-run SPC method transforms the collected data and can use predetermined control limits so that products with different process centers, and even different standard deviations, can be plotted on one chart.  This is applicable to:
  1. The same product model with lot-to-lot differences in process center
  2. Different product models with different process centers


There are 2 types of data transformation in short-run SPC to obtain the plot points:

  1. Target method: take the actual measurement readings and calculate the deviation from the target point (based on either the specification or process control) for the average chart
  2. Standardized method: a nominal transformation of the actual measurement data into plot points for both the average and range charts
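A minimal sketch of the target method, with hypothetical targets and readings: each plot point is the measurement's deviation from its own part's target, so parts A, B, and C with different nominals can share one chart.  (The standardized method would additionally scale by each part's expected spread.)

```python
# Hypothetical nominal targets for three part numbers run on the same line
targets = {"A": 50.0, "B": 120.0, "C": 75.0}

# (part, measured value) in production sequence, mixing models on one line
readings = [("A", 50.2), ("A", 49.9), ("B", 120.4), ("B", 119.8), ("C", 75.1)]

# Target method: plot point = deviation from that part's own target
plot_points = [(part, round(value - targets[part], 3)) for part, value in readings]
print(plot_points)  # all parts now plot around a common center line of 0
```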


Unfortunately, short-run SPC is still not a widely used technique, especially in the world of electronics part manufacturing, possibly due to a lack of true quality engineering experts.  There is also a software limitation: currently only one commercial statistical software package that I know of, StatSoft Statistica, is capable of plotting short-run SPC charts.

It is imperative to monitor the critical quality parameters correctly, so that the measured quality parameters reflect the actual product quality per the customer requirement; this is the path to consistently good quality products.  Short-run SPC is one of the techniques that works in a high-complexity, low-volume environment.

Appreciation note: I would like to thank my mentors at Dell who introduced me to short-run SPC, enabled me to go further in my journey of SPC discovery, and allowed me to share it with my audience.


Why Traditional Statistical Process Control Monitoring does not work anymore – Part 1

Statistical Process Control has been deployed for almost 100 years in many established manufacturing organizations, such as Western Electric and General Electric, to monitor processes for potential special causes.  However, over the past 20 years I have found that many manufacturing organizations producing computer and IT gadget parts have been struggling to use Statistical Process Control as a process monitoring tool, mainly because the charts exhibit too many false-alarm out-of-control situations.

So what has changed over the past 20 years?  The boom of the computer industry has come to dominate the global manufacturing ecosystem, with supply chains that are much more complex and longer compared to other manufacturing industries such as automobiles.  To stay competitive, IT gadget suppliers such as computer or mobile phone companies offer many options, including different shapes and sizes of product, to satisfy myriad consumer preferences in the market.  Gone are the days when a factory shop floor could run products for a few weeks or even months without changing model.  Some manufacturers used to run a single product until end of life, such as the Model T Ford automobile in the last century!

Today, daily or even hourly model conversion has become the norm in many high-complexity, low-volume manufacturing industries catering to different consumer needs.  Manufacturers are making smaller lot sizes, some of which can be less than 100 pcs per batch, with a batch run lasting only an hour or less.  If the control chart sampling frequency is 5 pcs per hour, then only one data point can be collected.  There are not even enough data points to calculate control limits.


Most manufacturing lines use 2 approaches in traditional SPC to manage frequent model changes, and both have their own problems:

  1. Use one chart per model or part number for a single production run.  Problem: there are not enough points to read a potential out-of-control situation, since some out-of-control patterns need about 7 points.  Increasing the sampling frequency is not an answer, as it increases the cost of production.
  2. Use one chart per model or part number across multiple production runs; whenever there is a production run for a part number, the same chart is used to plot the SPC points.  Problem: there will be too many false alarms, such as center-line shifts caused by different production lots throughout the supply chain.

Below is an example of notebook cover length data collected over different production runs.  It is practically impossible to apply the conventional SPC method to monitor this process, as the trend shifts with each new production run.

This chart shows parts from different batches on different dates, with different process centers and possibly different data spreads.

With the traditional SPC method, the manufacturing engineer could use one chart per single lot date code run.  This is not a practical method, as there would be too many charts to monitor for a single product.

In part 2 of this article, we shall look at how to simplify the SPC chart for such a scenario, where parts have different process centers and even different data spreads across production lots.

Monitoring Process Output using Statistical Process Control Method

One of the most effective ways to monitor key process output performance is a statistical process control (SPC) chart.  This method was invented by Dr. Shewhart nearly a century ago, and with some enhancements over time it remains the most frequently used process control method.  A control chart is actually a run chart with an upper control limit, a lower control limit, and a process average calculated from the actual key process output.  The control limits must be calculated while the process is under the influence of stable common cause variation only.  The 3 lines (upper and lower control limits and the average) are used as guide posts to show the presence of special cause variation in the process, which requires immediate attention.  The interpretation of the control chart is covered in my previous article (http://www.360qualitymanagement.com/2017/09/how-to-detect-common-cause-and-special.html).  In order for SPC to monitor the process effectively, manufacturers must create a proper procedure for managing the SPC implementation.
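As a minimal sketch (hypothetical subgroup data, standard Shewhart constants for a subgroup size of 5), the X-bar and R control limits described above can be computed as follows:

```python
import numpy as np

# 4 subgroups of n=5 measurements each (illustrative; a real control limit
# study should use far more subgroups, e.g. 100, from a stable process)
subgroups = np.array([
    [5.02, 4.98, 5.01, 5.00, 4.99],
    [5.03, 5.00, 4.97, 5.01, 5.02],
    [4.99, 5.01, 5.00, 4.98, 5.00],
    [5.00, 5.02, 4.99, 5.01, 4.98],
])

A2, D3, D4 = 0.577, 0.0, 2.114  # Shewhart chart constants for subgroup size n=5

xbar = subgroups.mean(axis=1)                         # subgroup averages
rng = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges

xbarbar, rbar = xbar.mean(), rng.mean()               # grand average, average range
print(f"X-bar chart: CL={xbarbar:.4f}  UCL={xbarbar + A2 * rbar:.4f}  LCL={xbarbar - A2 * rbar:.4f}")
print(f"R chart:     CL={rbar:.4f}  UCL={D4 * rbar:.4f}  LCL={D3 * rbar:.4f}")
```

The constants A2, D3, and D4 depend on the subgroup size; the values above apply only to n=5.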

After auditing more than 100 supplier sites producing electronics parts (first tier and sub-tier) across the globe, I have NOT yet seen a decent SPC procedure.  Some companies do not even have an SPC procedure, and some companies' SPC procedures contain only textbook information on the types of control chart and how to plot a control chart with the control limit calculations.  An SPC procedure is NOT about how to plot a control chart; it should be about how to plan, implement, and manage SPC within the process.  Each site or company should establish its own SPC procedure, not copy one from an SPC textbook.


The table below shows some of the recommended content for an SPC procedure in detail:

  1. Identify the person in charge of SPC: There must be a dedicated department, or at least a dedicated team, responsible for the implementation of SPC.
  2. SPC training for employees: Outline the SPC training curriculum for each level of employee in the company, such as operator, technician, engineer, and even management.
  3. Select the parameters that need to be controlled: The most effective method to determine which process output parameters should be monitored and controlled is the failure mode and effects analysis (FMEA) technique.  FMEA is a systematic prediction used to identify potential failures that will impact the customer, and the controls needed to minimize or eliminate those failure risks.  There are also other methods, including product mapping, brainstorming, etc.
  4. Set up the control chart (chart selection, control limit calculation, rational subgrouping): Select the most suitable type of chart (attribute or variable), choose rational subgrouping by category (machine, line, or tooling) and the subgroup size, then calculate the control limits.  There are a few good articles on rational subgrouping by Dr. D. J. Wheeler on the internet.  The SPC control points must be updated in the process management plan.
  5. Manage the control limits: The responsible SPC person must be able to determine when to fix a control limit according to the nature of the process.  Normally it is recommended to study the trend of 100 subgroup points under the influence of common cause variation before fixing the control limits.  Once the control limits are fixed, they should NOT be revised unless there is a major improvement in one or more process inputs.
  6. Define the out-of-control (OOC) rules: No manufacturing process in the world can use all 7 Western Electric rules; it would be too complicated to control the process, and there would be too many false alarms.  Normally I recommend that companies use only 2-3 rules to avoid complication.
  7. Reaction plan when there is an OOC: If there is an out-of-control trend per the company's defined rules, an investigation should be conducted to check for the presence of a special cause.  Efforts should be taken to eliminate unwanted special causes and to bring the process back under the influence of common causes only.  There must be clear ownership to drive the problem to closure, preferably through the company's corrective action request system.  Refer to my previous article on corrective action (http://www.360qualitymanagement.com/2017/09/various-special-cause-problem-solving.html).
  8. Review the effectiveness of the SPC chart: Check and balance to ensure SPC is implemented correctly and effectively.  A review should be conducted monthly or quarterly to confirm that false alarms are within the preassigned goal, and that the SPC chart is effective in catching special cause defects, through correlation with any special causes that occur.
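In line with the recommendation to use only 2-3 out-of-control rules, here is a minimal sketch (hypothetical data and limits, not from any real chart) that checks just two common rules: a point beyond a control limit, and 7 consecutive points on one side of the center line.

```python
def out_of_control(points, cl, ucl, lcl, run_length=7):
    """Return (index, rule) alarms for two common out-of-control rules."""
    alarms = []
    # Rule 1: any single point beyond the upper or lower control limit
    for i, x in enumerate(points):
        if x > ucl or x < lcl:
            alarms.append((i, "beyond control limit"))
    # Rule 2: `run_length` consecutive points on the same side of the center line
    for i in range(len(points) - run_length + 1):
        window = points[i:i + run_length]
        if all(x > cl for x in window) or all(x < cl for x in window):
            alarms.append((i, f"{run_length} points on one side of CL"))
    return alarms

# Hypothetical subgroup averages: within limits, but 7 in a row above the CL
data = [5.01, 5.02, 5.01, 5.03, 5.02, 5.01, 5.02, 4.96]
print(out_of_control(data, cl=5.00, ucl=5.05, lcl=4.95))
```

Limiting the rule set like this keeps the false-alarm rate manageable; every added rule increases the chance of flagging a process that is actually stable.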

Ironically, many companies create SPC charts just to fulfill the customer requirement of using SPC to monitor the process.  Upon a closer look at the charts, there are many faults, such as:

  1. The control limits are actually spec limits
  2. Out-of-control trends/points are not investigated
  3. The chart shows a cyclic up-and-down trend due to a wrong subgroup category
You can gain more insights on how to use SPC as an effective process monitoring system by taking my course.



If an SPC chart does NOT serve its purpose of detecting special cause variation, then it is better to remove the chart entirely rather than waste resources maintaining and printing a chart that does not bring any value!

Process capability index shows 1.33 during the pilot run, yet the reject rate is more than 10%

When I was working for a multinational corporation as a supplier quality engineering manager, I saw many cases where procurement struggled to get consistent supply from key component suppliers even though those suppliers met the goal of 1.33 for key parameters.  The reason given by the suppliers was a poor yield rate of less than 90%.  In my previous article, we learned that the process capability index actually corresponds to a potential reject rate percentage.  If the process capability index is more than 1.00, there should be less than a 0.27% reject rate, or more than 99.73% good parts.  So by right, if a supplier reports a process capability index Ppk of 1.33, they should have about a 99.99% yield rate.  So where are the gaps?
Since the reject rate is estimated from a sample, we will not get an exact match; however, it should be close, such as within a 0.1% reject rate.  There are a few reasons why the reported process capability index does not match, or even come close to, the projected reject rate:

  1. Specification that is too wide.  The derived specification does not reflect the actual customer requirement; the specification tolerance could be too loose.  When the specification tolerance is too wide, it is very easy to achieve a process capability index of 1.33.
  2. Inaccurate quality metrics data.  To obtain the data to generate a process capability index, we measure the selected quality metric, and the measurement process may contribute too much variation.  Inaccurate measurement data leads to an inaccurate process capability index.  (Refer to my article dated 28 Sep 2017 on the importance of a good measurement process: http://www.360qualitymanagement.com/2017/09/importance-of-performing-gage.html)
  3. Biased sampling.  The sample selected to calculate the process capability index is NOT a random sample and does not represent the actual population.  Almost all the suppliers I have worked with cherry-picked parts during the new product (NP) trial run stage so that they could meet the process capability index goal of 1.33.  Later, in actual production, they had high reject rates, which could be >10%, and had trouble meeting the delivery schedule.
  4. The sample during the NP stage violated the assumptions required for process capability to give a meaningful reject rate: the data is NOT normally distributed, or the data is NOT from a stable process that is free from special causes.


Among the 4 reasons given for the mismatch between the Ppk value and the reject rate, the most common are cherry-picked measurement data and data that is not normally distributed as a result of the cherry-picking.  Therefore, validation must be done on the process capability index report provided by process engineering or suppliers:

  1. Measurement data gage repeatability and reproducibility: ensure the measurement data collected is accurate and within the requirement of the GR&R goal (<10%, or up to 30% in marginal cases).  Request all the raw GR&R measurement data from the supplier/process and check the data using statistical software.
  2. Plot a histogram or distribution chart of the measurement data for at least 30 samples and check the distribution.  If you get a distribution pattern other than figure 1, then most probably screened data has been used.  This type of data usually does NOT represent the population distribution, so the process capability index value generated does not give an accurate projection of the reject rate.
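A minimal sketch of check 2 using simulated data: a Shapiro-Wilk normality test on a hypothetical unscreened sample versus the same sample truncated to a narrow window, the way cherry-picked capability data often looks.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
raw = rng.normal(loc=142.5, scale=0.8, size=200)   # hypothetical unscreened process data
screened = raw[(raw > 141.5) & (raw < 143.5)]      # the same data "sorted" to a window

for label, sample in [("raw", raw), ("screened", screened)]:
    stat, p = stats.shapiro(sample)
    verdict = "looks normal" if p > 0.05 else "NOT normal, investigate the sampling"
    print(f"{label:8s}: n={len(sample):3d}  Shapiro-Wilk p={p:.3f}  {verdict}")
```

Screened (truncated) data tends to fail the normality test or show a chopped-off histogram, which invalidates any reject-rate projection from Ppk.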




In order to get a meaningful process capability index value that reflects the actual quality of the process, we must ensure that the sample represents the actual population.  Remember that we will never know the true population performance; we rely on the sample to make a correct inference about the population.  This is also the case with the process capability index.

Process output monitoring: What does a process capability index > 1.33 mean?

By now my readers should have a better understanding of what variation is, what a process is, and what its inputs and outputs are.  To achieve consistently good quality products from a manufacturing process, it is imperative to manage and control all the process inputs: man, machine, method, material, measure, and environment.  The next question is: after we control all the process inputs, how do we know whether we have produced consistently good parts per the customer specification or requirement?  The only way to know is to measure the output produced and collect measurement data for analysis.

One of the most widely used measurement data analysis methods to determine product quality is process capability.  In this method, the process output is measured for a quality characteristic such as a dimension, and the distribution of the data is compared with a predetermined specification.  The specification is either given by the customer or derived based on the customer requirement.

A simple example is the molding process for a mobile phone plastic cover.  The quality metric in this case is a dimension such as the length or width of the cover, which must fit properly onto the LCD assembly to form a complete phone.  If the length specification provided by the design team is 140-145 mm with a target of 142.5 mm, the molding process needs to produce parts between 140 and 145 mm.  Since it is not practical to measure the length of every part, a sample that represents the population needs to be measured to check whether the length falls between 140 and 145 mm.  The recommended sample size is at least 30.  Once the data is collected, a histogram is plotted, which usually forms a bell-shaped curve known as the normal distribution, per figure 1.  Assuming the current process produces most parts centered on the target value of 142.5 mm, the peak will be around 142.5 mm.  The peak where most of the data is centered is known as the central tendency in descriptive statistics.  We then compare the 6-sigma process spread with the specification tolerance.
   
Figure 1 Normal distribution of measurement data length

If the process spread is less than the specification tolerance, as in figure 2, then there is a higher chance that most of the parts will meet the specification.  The comparison of the specification tolerance with the process spread is known as the process capability index, Ppk or Cpk.  If the process spread is more than the product specification, as in figure 3, then anything outside the product specification is considered a reject.  The reject rate will be higher in this case compared to figure 2.

Figure 2.  Process spread width is smaller than specification width,
almost all parts are in specs

Figure 3.  Process spread width is bigger than specification,  
there are parts that are out of spec

Process capability can be used to estimate the manufacturing process reject rate.  The universally accepted process capability index ratio between the specification tolerance and the process spread is 1.33.  Below are the more commonly used process capability values and their corresponding potential reject rates for the measured quality metric.  Commercial statistical software can compute the estimated total reject rate once the process capability is generated.
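A minimal sketch (hypothetical sample, using the 140-145 mm cover-length spec from the example above) of how Ppk and the corresponding estimated reject rate are related, assuming normal data from a stable process:

```python
import numpy as np
from scipy import stats

lsl, usl = 140.0, 145.0                              # spec limits from the example (mm)
rng = np.random.default_rng(1)
sample = rng.normal(loc=142.5, scale=0.6, size=50)   # hypothetical measured lengths

mean, sigma = sample.mean(), sample.std(ddof=1)
ppk = min(usl - mean, mean - lsl) / (3 * sigma)      # nearer spec limit vs 3-sigma spread

# Estimated fraction outside spec (both tails), assuming normality and stability
reject = stats.norm.cdf(lsl, mean, sigma) + stats.norm.sf(usl, mean, sigma)
print(f"Ppk = {ppk:.2f}, estimated reject rate = {reject * 100:.4f}%")
```

A perfectly centered process with Ppk = 1.00 corresponds to spec limits 3 sigma away and roughly 0.27% outside spec; Ppk = 1.33 pushes the limits to 4 sigma and the estimate down to about 63 ppm.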




In order for the process capability index number to give a meaningful estimate of the population reject rate, there are 3 conditions that must be fulfilled:

  1. The data should be variable data (refer to my blog dated 5 Oct 2017: http://www.360qualitymanagement.com/2017/10/essential-data-collection-for-quality.html)
  2. The data must be normally distributed
  3. The data must come from a stable process that is free from special causes (refer to my blog dated 8 Sep 2017: http://www.360qualitymanagement.com/2017/09/understanding-process-variations-in.html)

In most cases it is impossible to measure every single production part, which would give an exact reject rate; therefore we use the ratio of the specification tolerance to the actual process spread, Ppk, to estimate the total production reject rate.  Many organizations set a process capability goal of Ppk 1.33 for product output parameters; however, a lot of them do not know the actual meaning of Ppk 1.33, much less how to fulfill the 3 conditions above to give a meaningful estimation of the reject rate for a process.

Note:
Please note that this article does NOT cover the technical details or the calculation of process capability.  There are many sources that can furnish this information.  The intent of this article is to translate the process capability number into a practical conclusion that management understands.

Is the Six Sigma approach a SCAM?

In this article I will open a Pandora's box: why has Six Sigma failed in recent times after its proven success in the last century?

In the late 80s, Motorola pioneered an iconic problem-solving technique named Six Sigma, a quality improvement methodology based on variation reduction, with a goal of 99.99966% acceptance.  This means the process produces parts within a 12-sigma-wide range (plus or minus 6 sigma) of a normal distribution, and the reject rate, allowing for the conventional 1.5 sigma long-term shift, is about 3.4 dppm.  It was indeed an epic achievement given the limitations of design and manufacturing technology at that time.  With such a low reject rate, Motorola boasted cost savings, which meant more profit.  Fast forward to this millennium: the once-glorious company had to be broken up after losing billions of USD over several consecutive years.
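As a side calculation, the famous 3.4 dppm figure can be reproduced: with spec limits at plus or minus 6 sigma and the conventional 1.5 sigma long-term mean shift, the nearer spec limit sits 4.5 sigma from the mean, and the normal tail beyond 4.5 sigma is about 3.4 per million.

```python
from scipy.stats import norm

# One-sided normal tail beyond 6 - 1.5 = 4.5 sigma, expressed in parts per million
dppm = norm.sf(6 - 1.5) * 1e6
print(f"{dppm:.1f} defects per million")  # about 3.4 dppm
```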
  
There is also news of several companies that failed even after deploying Six Sigma:

  1. GE: Under the leadership of Jack Welch, the company adopted Six Sigma as its business management strategy, and unfortunately almost 60% of GE's Six Sigma initiatives failed to attain the desired goals.
  2. Ford: Suffered losses even after deploying Six Sigma and Design for Six Sigma (DFSS).
  3. Home Depot: Former CEO Robert Nardelli was ousted due to his obsession with using the Six Sigma methodology to solve every problem.  This caused misery for the workers, which directly impacted consumers, since Home Depot is a retail business.
  4. 3M: When a former GE executive became CEO of the company, he instilled the Six Sigma methodology in all areas, including design.  The designers claimed that Six Sigma hindered creativity and innovation.


Given all the above mega-corporation failures, is it that Six Sigma actually does not work, or are there other reasons?  I can think of a few reasons for the failure of Six Sigma initiatives:


  1. Top management relying too heavily on Six Sigma as a business management tool.  Six Sigma does not yield good results in a business environment, as there are too many uncontrollable factors that cannot be addressed.  There are instances where we have identified the causes or factors that impact an output, yet those factors are uncontrollable, which means it is near impossible to create an action plan.  In business situations, uncontrollable factors are more prevalent than controllable factors.  Therefore Six Sigma can never be effective in producing the desired results when there are more uncontrollable factors* than controllable factors.
  2. A one-shoe-fits-all attitude, which can arise when Six Sigma is the only problem-solving approach a company executive knows, without in-depth knowledge of what Six Sigma is all about.  They expect all employees to spend time force-fitting Six Sigma techniques to all problems.  This causes misery for employees.
  3. Lack of a competent Six Sigma champion or top management to lead the quality improvement initiatives.  I have seen organizations hire incompetent Six Sigma directors/champions who do not know what they are doing or do not have good knowledge of statistics.  Statistics is the soul of Six Sigma.  Most of these organizations do not have the expertise to validate the competency of the Six Sigma champion they are hiring.  In turn, this type of champion ends up unable to guide employees to use the correct tools to effectively resolve problems.  There are instances where champions insist on using every tool taught in Six Sigma in a single variation reduction project!!
  4. Inaccurate data leading to the wrong solution.  A lot of people have forgotten that the fundamental of Six Sigma is collecting accurate data.  Undisciplined workers do not go to all lengths to collect and validate the accuracy of the sample data, which is needed to accurately predict population behavior.
  5. Poor support from executive management, who expect the Six Sigma champion to push everyone to participate in the Six Sigma initiative.
  6. Design people claiming to be restricted by Six Sigma, because they have to play by the rules to ensure manufacturing can produce Six Sigma quality.  In actual fact, design is one of the major culprits behind quality issues that make a manufacturing process unable to produce consistently good products.


The Six Sigma approach is still one of the most powerful tools for achieving consistent quality by reducing variation in the process inputs that impact product quality.  It must be applied correctly: with accurate measurement of the quality metrics during data collection, and with an understanding of the influence of controllable and uncontrollable factors through root cause analysis, before any action plan for improvement is assigned.  If Six Sigma is applied correctly, it will not only reduce product quality variation; it can also predict the future quality of the population.

Side note*

One of the ways to manage uncontrollable factors is through unconventional knowledge, such as using Chinese metaphysics to predict business outcomes.  Since I am also a practitioner of Chinese metaphysics, I have clients who request my service to predict business outcomes using the tools of Chinese metaphysics.  Feb 4 signifies the arrival of spring in the Chinese solar calendar and a new beginning.  I would like to wish all my readers a great year ahead in 2018, and thank you for supporting my website.
