Content Menu
● What Actually Drives CNC Machine Utilization During Ramp-Up
● Step-by-Step Forecasting Process That Works in Real Shops
● Practical Forecasting Methods
● Best Practices and Pitfalls to Avoid
● Q&A: Questions Manufacturing Engineers Ask Most
New product launches in CNC machining environments almost always create capacity headaches. The sales team commits to aggressive timelines, engineering releases models with features nobody has cut before, and the shop floor has to figure out where the hours will come from on machines that are already booked solid with existing work.
The core problem is simple: until the new part actually runs in production, nobody knows the real cycle time, setup frequency, or scrap rate with certainty. A 20% underestimate on a high-volume program can force overtime, delayed shipments on other customers, or emergency capex that nobody budgeted for. A 30% overestimate ties up capital in idle spindles or pushes work to higher-cost suppliers.
This article lays out practical methods that hundreds of job shops, contract manufacturers, and OEMs use today to forecast equipment utilization accurately enough to make good decisions during NPI. The approaches range from dead-simple analog comparisons that any supervisor can do in Excel to regression models and discrete-event simulation used by larger operations. Everything here is based on real implementations I have seen or helped build in aerospace, medical, automotive, and oilfield machining companies.
At its simplest, the metric being forecast is:

Utilization = (cutting time + setup time) / available time
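The ratio is trivial to compute; the hours below are invented for illustration:

```python
cutting_hr = 410.0    # spindle-on hours this month (hypothetical)
setup_hr = 55.0       # setup hours charged to the machine (hypothetical)
available_hr = 640.0  # scheduled hours for the period (hypothetical)

utilization = (cutting_hr + setup_hr) / available_hr
print(f"{utilization:.1%}")  # -> 72.7%
```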
For mature parts the numbers are reasonably stable. For new parts the variables swing hard:
A contract manufacturer I know introduced a new 7075 aluminum avionics chassis. Engineering quoted 185 minutes on a Matsuura 5-axis. First production lot averaged 312 minutes once they added helium leak-test probing, extra spring passes on thin floors, and changed to a different insert geometry. That single part pushed three machines from 74% to 96% utilization overnight and cost them two other programs.
Another shop making Inconel 718 turbine seals estimated 42 minutes per part on a new mill-turn. After switching to ceramic roughers and adding balanced toolholders they dropped to 26 minutes. Suddenly they had capacity to spare and took on an extra $1.2 million of work that year.
Accurate forecasting is the difference between those two outcomes.
Start with the solid model and routing before the part ever hits the floor.
Most programmers today run a quick CAM simulation (NX, Mastercam, hyperMILL, etc.) with conservative feeds and speeds. Add 8–12% for the acceleration/deceleration losses and tool-change time that many simulations ignore. That gives a baseline.
Then compare against the analog library (more on that below). If the new part is within 15% of past jobs on removed volume and feature count, trust the historical average more than the fresh simulation.
First-article prove-outs and low-rate runs kill utilization. Typical numbers I see:
Use a 75–80% learning curve for setups (each doubling of cumulative pieces reduces time by 20–25%). Do not assume mature setup time in month one.
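That curve is easy to tabulate. A minimal sketch, assuming a hypothetical 240-minute first setup and an 80% learning rate:

```python
import math

def setup_time(first_setup_min, setup_number, learning_rate=0.80):
    """Learning-curve model: each doubling of cumulative setups
    multiplies the time by learning_rate (0.75-0.80 is typical)."""
    exponent = math.log2(learning_rate)  # negative, so time shrinks
    return first_setup_min * setup_number ** exponent

# The 8th setup is three doublings in: 240 * 0.8**3 = 122.88 min.
print(round(setup_time(240, 8), 2))  # -> 122.88
```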
Almost no program goes from 5 pieces to 500 overnight. Common patterns:
Plot the expected units month-by-month and multiply by the maturity-adjusted cycle time for that month.
Add 10–20% extra time for the first 3–6 months to cover scrap and rework. I have never seen a new part hit 98% first-pass yield immediately unless it is a direct copy of an existing one.
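Putting the ramp plan, maturity factors, and yield buffer together, the month-by-month load calculation can be sketched as follows; every number here is hypothetical:

```python
mature_cycle_hr = 2.0                             # mature cycle time, hours/part
volumes  = [20, 50, 120, 250, 400, 500]           # planned monthly ramp, parts
maturity = [1.50, 1.35, 1.20, 1.10, 1.05, 1.00]   # cycle-time multiplier by month
buffer   = [0.20, 0.15, 0.10, 0.10, 0.05, 0.00]   # scrap/rework allowance by month

load_hours = [v * mature_cycle_hr * m * (1 + b)
              for v, m, b in zip(volumes, maturity, buffer)]

for month, hours in enumerate(load_hours, start=1):
    print(f"month {month}: {hours:7.1f} machine-hours")
```

Dividing each month's load by that machine group's available hours gives the forecast utilization curve.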
Maintain a spreadsheet or simple database of every part run in the last 3–5 years with:
When a new part arrives, find the three closest matches (Euclidean distance on normalized features works fine). Average their mature cycle times, then apply maturity factors:
200 pieces: 0%
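A minimal sketch of the analog lookup. The four-part library, the normalized feature vectors, and the 30% month-one maturity factor are all invented for illustration:

```python
import math

# Analog library: normalized features (volume removed, hole count,
# surface area) and mature cycle time in minutes -- all hypothetical.
library = {
    "bracket-A":  ([0.30, 0.10, 0.25],  41.0),
    "manifold-B": ([0.55, 0.60, 0.70], 118.0),
    "chassis-C":  ([0.50, 0.45, 0.65],  96.0),
    "housing-D":  ([0.52, 0.55, 0.60], 104.0),
}

def predict_cycle(new_features, k=3):
    """Average the mature cycle times of the k nearest analogs,
    using Euclidean distance on the already-normalized features."""
    nearest = sorted(library.values(),
                     key=lambda pair: math.dist(new_features, pair[0]))[:k]
    return sum(time for _, time in nearest) / k

base = predict_cycle([0.50, 0.50, 0.62])  # mature estimate from 3 analogs
month_one = base * 1.30                   # apply early-lot maturity factor
```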
A 120-person aerospace shop in California gets ±9% average error across 200+ new programs per year using nothing more than this method in Excel.
Extract features automatically (NX Feature Recognition, FeatureCAM, or custom PowerQuery scripts) and run a multiple linear regression:
Cycle_time = a + b×Volume_removed + c×Number_holes + d×Surface_area + e×Max_depth + …
Add categorical variables for material and machine type. Shops making families of similar parts (camera housings, valve bodies, gearbox cases) routinely hit R² > 0.88.
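Assuming the feature values have already been extracted, the fit itself is plain least squares. The training matrix and new part below are invented:

```python
import numpy as np

# Hypothetical training set: volume removed (cm^3), hole count,
# surface area (cm^2); y is the measured mature cycle time (min).
X = np.array([
    [120.0,  8, 340],
    [ 95.0,  4, 280],
    [210.0, 16, 520],
    [160.0, 10, 400],
    [ 60.0,  2, 150],
])
y = np.array([44.0, 31.0, 78.0, 57.0, 18.0])

# Intercept column plus least-squares fit:
# Cycle_time = a + b*Volume_removed + c*Number_holes + d*Surface_area
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Material and machine type would enter as extra 0/1 dummy columns.
new_part = np.array([1.0, 140, 12, 380])  # invented new part
predicted = float(new_part @ coef)
```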
For larger cells or high-capex decisions, build a FlexSim, Siemens Plant Simulation, or Visual Components model. Feed it:
Run scenarios for low/medium/high volume cases. A pump manufacturer avoided buying a $1.4 M gantry mill by proving they could move impeller roughing to underutilized 3-axis machines at night.
Take the point estimates above, assign triangular or normal distributions (e.g., cycle time ±25%, volume ±30%), and run 5,000–10,000 iterations in Excel or @Risk. Report P10/P50/P90 utilization numbers to management. Anything pushing P90 above 85–88% triggers action.
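A dependency-free sketch of that Monte Carlo loop. The point estimates, the triangular spreads, and the 640 available hours are all invented:

```python
import random

def simulate_utilization(n_iter=10_000, seed=42):
    """Monte Carlo on the utilization ratio with triangular
    distributions around hypothetical point estimates."""
    rng = random.Random(seed)
    available_hr = 640.0  # scheduled hours for the machine group (hypothetical)
    results = []
    for _ in range(n_iter):
        cycle = rng.triangular(1.5, 2.5, 2.0)    # hours/part, ~±25% around 2.0
        volume = rng.triangular(175, 325, 250)   # parts/month, ~±30% around 250
        setup = rng.triangular(30, 60, 40)       # setup hours for the month
        results.append((cycle * volume + setup) / available_hr)
    results.sort()
    percentile = lambda q: results[int(q * n_iter)]
    return percentile(0.10), percentile(0.50), percentile(0.90)

p10, p50, p90 = simulate_utilization()
```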

Three new 7075 wing ribs threatened to overload four 5-axis routers. Initial estimates showed the need for another machine. The analog method plus trochoidal roughing trials cut projected time 38%. Final utilization peaked at 82%; capex was deferred two years.
A Swiss contract manufacturer won a major revision-knee program. The regression model predicted the need for two extra Hermle C42s. By consolidating turning and milling onto Integrex i-400ST machines and adding Erowa pallet pools, they absorbed the entire 3× volume increase on the existing footprint.
A German Tier-2 underestimated hard-turning time on a new gear shaft by 45%. Utilization spiked to 98%, overtime exploded, and they lost a key customer. Their response: a mandatory 20-piece pilot run with measured times before any capacity commitment. Forecast error is now <11%.
Do these:
Avoid these:
Capacity forecasting for new products in CNC machining is never going to be perfect, but it does not have to be a coin flip either. Shops that treat it as a disciplined process — starting with solid analogs, layering in learning curves and yield buffers, validating with pilot data, and quantifying risk with Monte Carlo — consistently hit their launch windows without surprise overtime or delayed capex.
The winners today are not the shops with the most machines; they are the ones who know, with quantified confidence, exactly how many spindle hours they have and how the next new program will consume them. Implement the methods above and you will move from reacting to capacity fires to making calm, data-backed decisions that keep both customers and the balance sheet happy.