6.170 Lecture: Software Development Process

Presentation Transcript

Software Development Processes & Practices: 

Software Development Processes & Practices
Michael A. Cusumano
MIT Sloan School of Management
cusumano@mit.edu
© 2006

Different Cultural Orientations: 

Different Cultural Orientations
- EUROPE: Software as a Science (formal methods, object-oriented design)
- JAPAN: Software as Production (software factories, zero defects)
- INDIA: Software as a Service (Infosys, Tata, Wipro, Satyam, Cognizant, Patni)
- The USA: Software as a Business (Windows, Office, Navigator, $$$$)

Problems in Software Development: 

Problems in Software Development
Similar problems have been recurring since the 1960s. The 1969 NATO Report on Software Engineering documented problems in:
- requirements, and the separation of design vs. coding
- estimates, monitoring progress, communication
- productivity (26:1 variation), metrics, reliability (bugs)
- hardware dependencies, reuse, maintenance costs
Sound familiar?

Industry Reports (Standish Group): 

Industry Reports (Standish Group)
- Only 9% of IT projects in large businesses completed on time and on budget; 25% in small businesses (better, but still…)
- 31% of projects cancelled before completion
- 53% of projects cost 200% of original estimates
- Projects in large US companies delivered with only 42% of originally proposed features
Caveat: Are most firms bad at scheduling, estimation, and handling design changes?

Attempts to Deal with Problems: 

Attempts to Deal with Problems
A variety of industry efforts:
- IBM-style software engineering (1960s, 1970s)
- Japanese 'software factories' (1970s, 1980s: stable teams, standard process & tools, reuse)
- SEI Capability Maturity Model (CMM) (1980s to present)
- 'Iterative' and 'Agile' methods (US, Europe)
But no one process is perfect for all projects. Variations arise from business models, customer requirements, application domain, competition, pace of change, local culture?, etc.
Common theme: how to balance product quality, features & design flexibility, with project cost & speed?

Some Success Data (H. Krasner, U. Texas): 

Some Success Data (H. Krasner, U. Texas)
SEI surveys of 13 leading organizations:
- 3.5 years in an average Software Process Improvement (SPI) program, primarily SEI CMM
- $1375 avg. cost per year per software professional
- Annual productivity gain of 37%
- Annual post-ship defect reduction of 45%
- Average reduction in time to market of 19%
- Average ROI of 5.7x
Long-term benefits of SPI (4-10 years):
- 200-300% improvement in productivity
- Up to 10x improvement in quality
- Time to market cut in half

More Success Data (H. Krasner, U Texas): 

More Success Data (H. Krasner, U. Texas)
- Motorola: defects/M-SLOC dropped from 890 at CMM level 2 to 126 at level 5; cycle-time improvement factor from 1.0 at level 1 to 7.8 at level 5; productivity from 1.0 at level 2 to 2.8 at level 5
- Lockheed: CMM level 3 projects 3-5 times more productive than level 1 projects
- HP: 10x reduction in defects; time to fix defects cut in half
- IBM Toronto: 10x reduction in defects; productivity increase of 240%; rework reduced by 80%
- TI: 10x reduction in defects over 3 years; 60% productivity improvement over 2 years; 12% annual cycle-time cuts
- Bosch: 10x reduction in errors over 2 years

But Different Process Philosophies: 

But Different Process Philosophies
Waterfall-style (sequential, 'stage-gate') versus iterative-style (incremental, evolutionary):
- Spiral
- Rapid prototyping
- Synch-and-stabilize (Microsoft, PC makers)
- IBM Rational's Unified Process & toolkit
- HP's Evo process (short cycles of mini-waterfalls)
- Extreme Programming (XP), Scrum, Agile
- Many other variations at companies

Traditional Waterfall Model (One development cycle):

Traditional Waterfall Model (one development cycle): Requirements → Functional design → Detailed module design → Module construction (in parallel across modules) → Integration/system test → Module rework (debug) → Re-test, debugging → Product release

Frequent Waterfall Result: 

Frequent Waterfall Result: Requirements → Functional design → Detailed module design → Module construction (in parallel across modules) → Integration/system test. If modules change a lot, integration fails and you can experience an infinite defect loop.

Some Basic Tensions: 

Some Basic Tensions
- Requirements: How to determine what the individual customer (custom systems) or the market (product projects) wants? Need to show potential customers the design as it evolves?
- Design: How to determine whether the design implements the requirements? And how to determine whether the requirements are the right ones for the market?
- Integration: How to efficiently integrate and test design changes, bug fixes, or functions added late in a project?

Reality: Spectrum of ApproachesProcess Choices for Different Projects : 

Reality: Spectrum of Approaches Process Choices for Different Projects Need for User Feedback During Project Uncertainty in Requirements High Low # of Subcycles, Releases 1 Many Waterfall, 'Factory-like' Approaches Rapid Prototyping XP  Agile or Iterative Methods  Incremental Formal Methods?

Slide14:

[Diagram comparing the Microsoft iterative style ('Synch & Stabilize') with the traditional waterfall style.]

International Comparisons (2003 IEEE Software, “Software Dev. Worldwide”): 

International Comparisons (2003 IEEE Software, 'Software Dev. Worldwide')
- Survey: completed in 2002-2003, with Alan MacCormack (HBS), Chris Kemerer (Pittsburgh), and Bill Crandall (HP)
- Objective: determine usage of iterative (synch-and-stabilize) versus waterfall-ish techniques, with performance comparisons
- 118 projects, plus 30 from HP-Agilent for the pilot survey
- Participants:
  - India: Motorola MEI, Infosys, Tata, Patni
  - Japan: Hitachi, NEC, IBM Japan, NTT Data, SRA, Matsushita, Omron, Fuji Xerox, Olympus
  - US: IBM, HP, Agilent, Microsoft, Siebel, AT&T, Fidelity, Merrill Lynch, Lockheed Martin, TRW, Micron Tech
  - Europe: Siemens, Nokia, Business Objects

“Conventional” Good Practices:

'Conventional' Good Practices [table not reproduced in the transcript]

“Newer” Iterative Practices:

'Newer' Iterative Practices [table not reproduced in the transcript]

“Crude” Output Comparisons:

'Crude' Output Comparisons [table not reproduced in the transcript]

Observations from Global Survey: 

Observations from Global Survey
- Most projects (64%) were not pure waterfall; 36% were!
- A mix of 'conventional' and 'iterative' practices was common: use of functional specs, design & code reviews, but with subcycles and regression tests on frequent builds
- Customer-reported defects improved over the past decade in the US and Japan; LOC 'productivity' may have improved a little
- Japanese projects still report the best quality & productivity, but what does this mean? A preoccupation with 'zero defects'? Needing more lines of code per day to write the same functionality as US & Indian programmers?
- Indian projects were strong in process and quality, but not as strong as their dominance of CMM Level 5 suggests

Hewlett Packard Pilot Study (2003 IEEE Software, “Tradeoffs” article): 

Hewlett Packard Pilot Study (2003 IEEE Software, 'Tradeoffs' article)
- Managers: when to use iterative or waterfall?
- Survey: 35 responses; 29 projects with complete data
- Median: 170K LOC, with 70K new code; 9-person team; 14-month projects
- 59% applications, 38% systems, 28% embedded
- 74% of variation in defects explained by early prototypes, design reviews, and integration/regression testing on builds
- Median project: 40% of functionality complete when the first prototype was released; 35.6 defects per million LOC (.04 per 1000 LOC) reported by customers in the 12 months after release; 18 LOC per person-day (360/month)
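The unit conversions behind these medians can be checked with a few lines of Python (a sketch; the 20 working days per month used to relate daily and monthly output is an assumption, not stated on the slide):

```python
# Sanity-checking the HP pilot-study medians quoted above.

defects_per_mloc = 35.6            # customer-reported, 12 months post-release
defects_per_kloc = defects_per_mloc / 1000
assert round(defects_per_kloc, 2) == 0.04   # the slide's ".04 per 1000 LOC"

loc_per_person_day = 18
workdays_per_month = 20            # assumption: ~20 working days per month
loc_per_person_month = loc_per_person_day * workdays_per_month
assert loc_per_person_month == 360          # the slide's "360/month"

# Median project: 9-person team, 14 months, 70K LOC of new code.
team_output = 9 * 14 * loc_per_person_month
print(team_output)  # 45360 LOC: the right order of magnitude for 70K new LOC
```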

Simple Correlations: 

Simple Correlations [correlation table not reproduced in the transcript]. Significance: *p<10%, **p<5%, ***p<1%

Best-Fit Multivariate Models: 

Best-Fit Multivariate Models [regression table not reproduced in the transcript]. Significance: *p<10%, **p<5%, ***p<1%, ****p<0.1%

Multivariate Regression Analysis: 

Multivariate Regression Analysis
Some striking results, compared to the median project:
- Releasing a prototype earlier, with 20% of functionality → 27% reduction in defect rate
- Integration/regression testing at each code check-in → 36% reduction in defect rate
- Design reviews → 55% reduction in defect rate
- Releasing a prototype with 20% of functionality → 35% rise in LOC output per programmer
- Daily builds → 93% rise in LOC output per programmer

Observations from HP Survey : 

Observations from HP Survey
- Best 'nominal' quality from traditional 'waterfall' (fewer cycles & late changes = fewer bugs, of course!)
- Best balance of quality, flexibility, cost & speed from combining conventional & iterative practices
- BUT: differences in quality between waterfall & iterative disappear if you use a bundle of techniques:
  - Early prototypes (get customer feedback early)
  - Design reviews (continuously check the quality of the design)
  - Regression tests on each build (continuously check the quality of the code and functional status)

Where Iterative May Work Better: 

Where Iterative May Work Better
- Fast-paced product markets where technologies, competition, and product features may have to change a lot during a project
- Product or custom projects where customers place very high emphasis on leading-edge features
- Custom systems where customers place high emphasis on responsiveness to their input during a project
- Product or custom projects that require experimentation, as in lots of short design, build, and test cycles
- Other?

Where Waterfall May Still Work: 

Where Waterfall May Still Work
- Extremely high-reliability systems (product or custom projects), where functions are very well understood and no changes in requirements during a project are desired
- Embedded products with hardware constraints that cannot be easily changed
- Contract software where the client requires a detailed proposal upfront and it is not possible to limit scope/phases
- Complex projects (product or custom) where parts of design & coding are outsourced, off-shored, or done in multiple sites, AND there are weak mechanisms to synchronize and manage distributed teams

Making Large (& Distributed?) Teams Work Like Small Teams: 

Making Large (& Distributed?) Teams Work Like Small Teams
- Project size and scope limits (limits on product/project vision; manpower & time)
- Divisible product architectures (modularization by features and objects)
- Divisible project architectures (feature teams & clusters, milestone sub-projects)
- Small-team structure & management (small multi-functional groups; high autonomy and responsibility)

Making Large Teams Work Like Small Teams continued: 

Making Large Teams Work Like Small Teams, continued
- A few rigid rules to force coordination & synchronization (daily integration builds, milestone stabilizations, rigorous bug-fixing rules)
- Good communications across functions & teams (shared responsibilities, one site, common language, non-bureaucratic)
- Product-process flexibility for unknowns (evolving specs, buffer time, evolving process)

Windows Longhorn Debacle: 

Windows Longhorn Debacle
- The Windows architecture was challenged as old code was added to the NT code base to achieve compatibility with old Windows/DOS applications
- Problems intensified in the late 1990s as Windows got larger, buggier, and more spaghetti-like (probably due to the antitrust strategy of 'integrated innovation', sloppiness in design, and cramming too much into one product release)
- Longhorn 'gridlocked' when daily builds began to fail routinely, with 3500 programmers trying to check in code for a large number of complex new features to a spaghetti architecture: too many unforeseen dependencies, ongoing bugs, security problems, etc.

Microsoft Improvement Efforts: 

Microsoft Improvement Efforts
- 1998: effort started in Microsoft Research to develop and deploy better programmer support and testing tools more widely in the company
- 2003: Center for Software Excellence established in the Windows team from the research effort
- Longhorn/Vista countermeasures introduced that tweaked the product and process, especially adding much more automated testing, somewhat like a 'clean room' environment
- New leadership (process and executive management)

Windows Vista Countermeasures: 

Windows Vista Countermeasures
- Threw out the Longhorn code base and went back to the Windows 2000/03 XP server code base: smaller, more modular, less buggy
- Postponed major new proposed features, rewrote them as tighter modules, folding them into Vista as incremental builds and releases
- Introduced builds by 'team branches' that are then integrated into a 'main branch' (which the Office team did for Word, Excel, PowerPoint)
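The 'team branch' into 'main branch' pattern can be illustrated with modern git commands (a sketch only: Microsoft's Vista-era tooling was internal, and the branch and script names here are hypothetical):

```shell
set -e
# Sketch: a team branch builds and tests on its own, and only stabilized
# work is merged into the main branch, whose build must stay green.
rm -rf /tmp/branch-demo
git init -q /tmp/branch-demo && cd /tmp/branch-demo
git config user.email demo@example.com && git config user.name demo
main=$(git symbolic-ref --short HEAD)       # default branch name
git commit -q --allow-empty -m "baseline"

git checkout -q -b team/shell               # team branch forked from main
echo 'echo build-ok' > build.sh             # stand-in for the real build
sh build.sh                                 # gate 1: team-branch build passes
git add build.sh && git commit -q -m "feature work on team branch"

git checkout -q "$main"
git merge -q --no-ff -m "integrate team/shell" team/shell
sh build.sh                                 # gate 2: main-branch build stays green
```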

Vista Countermeasures cont’d: 

Vista Countermeasures cont'd
- Introduced new 'lightweight' tools to check automatically for a wider variety of errors (code coverage and correctness, API/component-architecture breakage, security, dependencies, memory use) and automatically reject code at desktop builds and branch check-in
- Heavyweight tools at the main branch
- More design & code reviews; new process leadership team (head formerly from Office: Jon De Vaan)
- New executive management from Office (Steve Sinofsky)
- Windows team probably capped at its present level

Some Comments on Iterative Practices Used in Industry: 

Some Comments on Iterative Practices Used in Industry
- High-level process
- Innovation & design
- Architecture
- Team management
- Project management
- Testing & QA

High-Level Process Strategy: 

High-Level Process Strategy 'You can’t do anything that’s complex unless you have structure.' For creative people, that structure should be as subtle as possible.

Dave Maritz Test Mgr, Windows 95: 

Dave Maritz Test Mgr, Windows 95 In the military, when I was in tank warfare … there was nothing more soothing than people constantly hearing their commander's voice come across the airwaves. Somebody's in charge, even though all shit is breaking loose.... When you don't hear [the commander’s voice] for more than fifteen minutes to half an hour, what's happened? Has he been shot? Has he gone out of control? … You worry. And this is what Microsoft is. These little offices, hidden away with the doors closed. And unless you have this constant voice of authority going across the email the whole time, it doesn't work.

Maritz continued:

Maritz continued: Everything that I do here I learned in the military.... You can't do anything that's complex unless you have structure.... And what you have to do is make that structure as unseen as possible and build up this image for all these prima donnas to think that they can do what they like. Who cares if a guy walks around without shoes all day? Who cares if the guy has got his teddy bear in his office? I don't care.... [But if] somebody hasn't checked in his code by five o'clock, then that guy knows that I am going to get into his office. From Microsoft Secrets

High-Level Process cont’d: 

High-Level Process cont'd
Structure: the process must fit the product & market/customer.
- Nature of the product or service: mission-critical vs. not; new vs. derivative; infrastructure vs. applications, embedded, etc.; packaged vs. custom, or a multi-system solution
- Nature of the market or customer: speed & innovation for leading-edge markets; quality & support for enterprise & mass markets

Innovation & Design Strategy: 

Innovation & Design Strategy
- Being 'slightly out of control' can stimulate innovative thinking and creativity. But too few controls → chaos or 'rocket science'
- Iterative is a middle approach: vision statements, outlines of functional specs, prototypes, early beta releases, a multi-version release mentality, etc.
- Late design changes can be good: respond to user feedback, competitors' moves, or unforeseen changes in the market & technology

Innovation & Design cont’d: 

Innovation & Design cont'd
- Economies of scope: leveraging efforts across multiple projects or groups → reusable templates or platforms, components, and technologies, with joint design teams
- Reuse, but of packaged components, with minimal changes (e.g. no more than 10-20%)
- Buy, license, or outsource if there is a strategic logic (the most efficient way to write software may be not to write it yourself…)

Architecture Strategy: 

Architecture Strategy
Definition: how to divide a product into subsystems and modules/components, and with what interfaces.
Strategy:
- degree of 'modular' vs. 'integral'
- degree of 'open' vs. 'closed' (proprietary); consider/beware of 'open but not open'!
A modular & 'horizontal' (feature-oriented) architecture:
- de-couples small development teams
- facilitates adding or cutting functionality
- facilitates reuse of components
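The de-coupling that a modular, feature-oriented architecture buys can be sketched in a few lines of Python (all names here are hypothetical, for illustration):

```python
# Sketch of a modular, feature-oriented architecture: each feature team
# owns one component behind a small, stable interface, so features can
# be added or cut without touching other teams' code.
from typing import Protocol


class Feature(Protocol):
    name: str
    def run(self) -> str: ...


class SpellCheck:
    name = "spellcheck"
    def run(self) -> str:
        return "spellcheck ok"


class AutoSave:
    name = "autosave"
    def run(self) -> str:
        return "autosave ok"


class Product:
    """The integration point depends only on the Feature interface."""
    def __init__(self) -> None:
        self.features: dict[str, Feature] = {}

    def add(self, f: Feature) -> None:      # add functionality
        self.features[f.name] = f

    def cut(self, name: str) -> None:       # cut functionality late
        self.features.pop(name, None)

    def build(self) -> list[str]:
        return [f.run() for f in self.features.values()]


p = Product()
p.add(SpellCheck())
p.add(AutoSave())
p.cut("autosave")   # dropped without changing any other component
print(p.build())    # ['spellcheck ok']
```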

Architecture Strategy: 

Architecture Strategy
- Tradeoff: investment in architecture for the future versus new features for the present
- Try to design architectures that will last. Rush jobs → 'spaghetti', hard to modify
- Or, plan to evolve architectures incrementally

Michael Toy, Release Mgr for Navigator/Com 4.0: 

Michael Toy, Release Mgr for Navigator/Com 4.0 We wrote this code three years ago and its major purpose in life was to get us in business as fast as possible. We should have stopped shipping this code a year ago. It's dead… We're paying the price for going fast. And when you go fast, you don't architect and so you don't say, 'I want those three-years-from-now benefits.' You want to ship tomorrow... From Competing on Internet Time

Team Management Strategy: 

Team Management Strategy
- Small teams of great people work best
- Large teams can work like small teams (within limits: not Longhorn!)
- What you need: one strong person in charge!
- Map the project architecture to the product architecture → feature & subsystem teams
- Keep feature teams small → 3-8 people
- Overlap functional responsibilities (vision/scope, requirements, QA)
- Focus everyone on shipping the product!

Bob Lisbonne, Netscape VP Client Development: 

Bob Lisbonne, Netscape VP Client Development When our teams grew beyond a certain point, they began to resemble a 200-person three-legged race … That’s why the componentization or the modularization of the product is so key, so that ultimately we can get back to lots of small teams each doing their own thing… and not getting caught up in one another’s efforts. From Competing on Internet Time

Chris Peters, VP Microsoft Office: 

Chris Peters, VP Microsoft Office Everybody in a business unit has exactly the same job description and that is to ship products. Your job is not to write code, your job is not to test, your job is not to write specs. Your job is to ship products.... When you wake up in the morning and you come to work, you say what is the focus? Are we trying to ship? Are we trying to write code? The answer is we are trying to ship.... You're trying not to write code. If we could make all this money by not writing code, we'd do it. From Microsoft Secrets

Slide46: 

[Diagram: Microsoft project team structure.]

Project Management Strategy: 

Project Management Strategy
- Avoid sequential 'waterfall' schedules (usually)
- Divide long projects into multiple sub-projects or milestones of a few weeks' or months' duration
- Evolve requirements incrementally: do spec design, development & testing concurrently
- Impose a few rigid rules to force frequent synchronizations and periodic stabilizations
- Synchronize-and-stabilize!

Project Management cont’d: 

Project Management cont'd
- Prioritize rigorously: most important features first
- Schedule 'backwards' (people & time)
- Let engineers schedule their own tasks; managers keep historical data on estimates
- Prototypes and early beta releases = rapid feedback on designs and quality
- Use historical data to schedule buffer time (for projects, not individuals) for changes & unknowns
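Using historical estimate data to size project-level buffer time can be sketched as follows (the numbers and formula are hypothetical; the slide prescribes only the practice, not a method):

```python
# Sketch: size project buffer time from historical estimate accuracy.
# The history below is made up for illustration.

# Past projects: (estimated weeks, actual weeks)
history = [(10, 13), (8, 9), (20, 27), (12, 14)]

# Average overrun ratio across past projects
overrun = sum(actual / est for est, actual in history) / len(history)

def schedule_with_buffer(engineer_estimate_weeks: float) -> float:
    """Pad the engineers' own estimate by the historical overrun,
    as project-level buffer (not padding added per individual)."""
    return engineer_estimate_weeks * overrun

print(round(schedule_with_buffer(16), 1))  # 19.8 weeks for a 16-week estimate
```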

Slide49: 

[Diagram: the synch-and-stabilize process.]

Synch-Up and Debug Daily: 

Synch-Up and Debug Daily
From Microsoft Secrets: few rules, but 'military-like' discipline to force coordination & communication.
- Developers can check in when they want
- Each project must build daily
- Check in at set times; you can't leave until the build is OK
- If you break the build, you must fix your code now!
- Penalties for those who break the build
- Minimize build overhead through a build team and automated tools
Objectives: synching up, 5-20 minutes; quick test, 30 minutes; total check-in, 5-60 minutes
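The daily check-in rule above can be sketched as a tiny gate (the function and field names are hypothetical; the real build system is not described in the source):

```python
# Minimal sketch of the synch-and-stabilize check-in rule: a change is only
# accepted into the daily build if the quick test passes, and whoever
# breaks the build must fix their code before doing anything else.

def quick_test(change: dict) -> bool:
    """Stand-in for the ~30-minute smoke test run at check-in."""
    return change.get("compiles", False) and change.get("tests_pass", False)

def daily_build(checkins: list[dict]) -> dict:
    accepted, rejected = [], []
    for change in checkins:
        (accepted if quick_test(change) else rejected).append(change["author"])
    # Rule: anyone who breaks the build must fix their code now.
    return {"accepted": accepted, "must_fix_now": rejected}

result = daily_build([
    {"author": "alice", "compiles": True, "tests_pass": True},
    {"author": "bob", "compiles": True, "tests_pass": False},
])
print(result)  # {'accepted': ['alice'], 'must_fix_now': ['bob']}
```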

Testing and QA Strategy: 

Testing and QA Strategy
- Try to build in quality continuously: design and code reviews; gates or checkpoints; continuous customer feedback, etc.
- But continually integrate and test: frequent builds (daily, weekly); especially check late design changes
- Test what? Unit/feature testing, of course; system integration testing, from early on!

Testing and QA cont’d: 

Testing and QA cont'd
- Automate to test & retest design changes quickly and frequently
- But automation never eliminates the need for people: someone has to write and rewrite (update) the automated tests
- You also need human testers to probe real user behavior

Testing and QA cont’d: 

Testing and QA cont'd
- Post-mortems: what went well, what went poorly, and what the team should do next time
- 'Eat your own dog food': get first-hand feedback on products as quickly as possible
- A few quantifiable metrics: to control and improve quality as well as monitor key product and process characteristics
- Early beta releases: provide feedback on the design as well as on quality

Concluding Comments: 

Concluding Comments
- No one 'best' software development process
- But 'waterfall' is less responsive to change
- And using a 'bundle' of practices eliminates differences in quality between waterfall and iterative!
- Global teams may still complicate an iterative process
- 'Better' depends on business, context, and strategy: type of software, customer requirements, team experience and culture, contract needs, etc.; product vs. custom, mass-market vs. niche, individual vs. enterprise, leading-edge vs. follower, etc.

Main References: 

Main References
- The Business of Software, M. Cusumano (Free Press/Simon & Schuster, 2004)
- Microsoft Secrets, M. Cusumano and R. Selby (Free Press/Simon & Schuster, 1995 and 1998)
- Competing on Internet Time, M. Cusumano and D. Yoffie (Free Press/Simon & Schuster, 1998)
- Michael Cusumano, Alan MacCormack, Chris Kemerer, and Bill Crandall, 'Software Development Worldwide: The State of the Practice', IEEE Software, November-December 2003 (international comparisons)
- Alan MacCormack, Chris Kemerer, Michael Cusumano, and Bill Crandall, 'Trade-offs between Productivity and Quality in Selecting Software Development Practices', IEEE Software, September-October 2003 (HP survey)
