Software Engineering Thoughts and Links

From NAMIC Wiki

The following comments were posted by Dave Tuch on the namic-corepis mailing list and are reproduced here with permission.


Requirements engineering

"The hardest single part of building a software system is deciding what to build. ...No other part of the work so cripples the resulting system if done wrong. No other part is more difficult to rectify later."

  • Brooks, F.P. Jr. No Silver Bullet: Essence and Accidents of Software Engineering. IEEE Computer, 10-19, April 1987.

Requirements engineering is recognized as a fundamental part of software development. The need for requirements elicitation and management is especially pressing for the NCBCs, given the large, cross-functional, geographically dispersed nature of the centers.

See:

  • http://en.wikipedia.org/wiki/Requirements_gathering
  • http://www-106.ibm.com/developerworks/websphere/library/techarticles/0306_perks/perks2.html
  • http://www.cs.toronto.edu/~sme/CSC2106S/

Requirements management should be seen as complementary to the spirit of scientific learning and discovery. We can manage requirements and still maintain a collaborative (as opposed to contractual) scientific culture through frequent, open intellectual exchange and by appreciating how requirements management can drive scientific innovation.

See

http://www.cmpevents.com/SDw5/a.asp?option=C&V=1&SB=4&AdS=1&scTKs=313&scFMTs=0&GetDaysC=0&SPids=0&CP1=0&scTKs1=0&scFMTs1=0&GetDaysC1=0

for some recent thinking on requirements management, particularly how requirements management relates to agile methods.


Expectations management

Software projects often fail because they do not meet (often unstated) measures of success harbored by key stakeholders. It's critical to elicit, clarify, and manage stakeholder expectations, and to incorporate those expectations into the requirements process.

See: http://sunset.usc.edu/classes/cs577a3_2003/coursenotes/ep/TheArtOfEM.pdf


Requirements elicitation vs. requirements gathering

The truth is that scientists and clinicians often don't know what they want from their software, or they can't express it in a language familiar to software developers. We should focus on developing tools and processes for eliciting requirements from scientists and clinicians.

See:

  • http://www.sei.cmu.edu/pub/documents/92.reports/pdf/tr12.92.pdf
  • http://www.cs.brown.edu/courses/cs190/2005/assignments/goguen%5B1%5D.pdf
  • http://www.cs.toronto.edu/~sme/CSC2106S/slides/04-elicitation-4up.pdf


Use cases

Use cases are a powerful tool for eliciting and documenting functional requirements. We should promote their active use.

See:

  • http://www.bredemeyer.com/use_cases.htm#Functional%20Requirements
  • http://www.cmpevents.com/SDw5/a.asp?option=C&V=11&SessID=4289
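To make the idea concrete, a use case can be captured as a lightweight structured record rather than free-form prose. The following is a minimal, hypothetical sketch in Python; the field names and the segmentation scenario are illustrative, not a NAMIC convention:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """A minimal use-case record: named actor, goal, and main success scenario."""
    name: str
    actor: str
    goal: str
    main_scenario: list = field(default_factory=list)

    def summary(self) -> str:
        # Render the use case as numbered steps for review with stakeholders
        steps = "\n".join(f"  {i}. {s}" for i, s in enumerate(self.main_scenario, 1))
        return (f"Use case: {self.name}\nActor: {self.actor}\n"
                f"Goal: {self.goal}\nMain scenario:\n{steps}")

# Hypothetical example: a clinician segmenting a tumor volume
uc = UseCase(
    name="Segment tumor volume",
    actor="Clinician",
    goal="Obtain a labeled tumor segmentation from an MRI scan",
    main_scenario=[
        "Load the MRI volume",
        "Select the segmentation tool",
        "Review and correct the proposed segmentation",
        "Export the label map",
    ],
)
print(uc.summary())
```

Keeping use cases in a structured form like this makes them easy to version, review, and trace against delivered features.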


Measure what you care about

"If you don't measure it you can't manage it" - Drucker

Scientists care about papers, grants, and citation counts not only because they have intrinsic value, but also because they are quantifiable measures they can use to gauge their productivity. We tend not to care about requirements satisfaction for scientific software because we don't measure it. Requirements satisfaction needs to be measured. For example: "On a scale of 1-10, how well does this tool address research (or clinical) need X?"
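A survey question like the one above yields numeric scores that can be aggregated into a trackable measure. A minimal sketch, assuming 1-10 ratings collected per tool (the tool names and scores here are invented for illustration):

```python
from statistics import mean

# Hypothetical 1-10 ratings answering:
# "How well does this tool address research (or clinical) need X?"
ratings = {
    "tool_a": [8, 7, 9, 6],
    "tool_b": [4, 5, 3, 6],
}

# Mean satisfaction per tool -- a simple quantifiable measure
# that can be re-surveyed each release and tracked over time
satisfaction = {tool: mean(scores) for tool, scores in ratings.items()}

for tool, score in sorted(satisfaction.items(), key=lambda kv: -kv[1]):
    print(f"{tool}: {score:.1f}/10")
```

Even a crude average like this turns "requirements satisfaction" from an unstated feeling into a number that can be managed.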


Processes

Culture and values are an integral part of team science yet are rarely identified as such. The software development community has done an outstanding job in identifying the role that culture and values play in the success of a project. Scientists and engineers could learn from how software developers openly identify, clarify, and manage cultural differences in order to achieve project success.

See: http://www.cmpevents.com/SDw5/a.asp?option=C&V=11&SessID=4280


Governance clarification

Scientific investigators or clinicians are often included on team projects with the implicit or explicit notion that they represent the needs of their larger community: "What do you guys need?" However, the investigators are often not aware of their role as representatives or of its responsibilities. For example, they don't survey other investigators to identify and prioritize their needs. This governance role should be clarified.


Lessons learned

It's accepted wisdom that large team science projects (particularly those involving large software components) are prone to failure. However, we've been remiss in not identifying how and why they fail and what lessons are to be learned. We need to be able to discuss the failures of past projects in a candid, non-judgmental way so that we can learn from past mistakes. Conversely, which scientific software packages have succeeded, and what can we learn from them? What did they do right? Let me ask you a question: which 5 scientific software packages do you admire the most, and why? If you asked that same question of software engineers, neuroscientists, clinicians, and NIH staff, what would the difference in answers tell you about their respective value systems?


People

Ultimately it comes down to the people, and NAMIC is blessed to have an amazing, truly world-class team of software engineers, scientists, and clinicians. As you mentioned, the judges of success for the NCBCs should also include front-line investigators, like post-docs, actively trying to apply the software tools in their research domains.

As the patron saint of NAMIC, Yogi Berra, said (paraphrasing), "There's no difference between theory and practice ... but that's not true in practice."