
Thread: Blueprint: Bootloader integration testing

  1. #1
    Join Date
    Mar 2007
    Location
    Caprica
    Beans
    2,002
    Distro
    Ubuntu Development Release

    Blueprint: Bootloader integration testing

    There's another interesting Blueprint here.

    Just like the one I posted before (about Jockey and drivers), this one is also on something we've been talking about for ages. There's some potential for improvement.

    Regards,
    Effenberg
    Last edited by effenberg0x0; May 15th, 2012 at 01:59 PM.

  2. #2
    Join Date
    May 2008
    Beans
    1,438
    Distro
    Ubuntu Development Release

    Re: Blueprint: Bootloader integration testing

    This cycle is going to be interesting

  3. #3
    Join Date
    Jun 2010
    Location
    London, England
    Beans
    7,525
    Distro
    Ubuntu Development Release

    Re: Blueprint: Bootloader integration testing

    It looks to me that a lot of development time is going into producing automated tests. Perhaps Canonical will sell its services as a software testing lab.

    Regards.
    It is a machine. It is more stupid than we are. It will not stop us from doing stupid things.
    Ubuntu user #33,200. Linux user #530,530


  4. #4
    Join Date
    Mar 2007
    Location
    Caprica
    Beans
    2,002
    Distro
    Ubuntu Development Release

    Re: Blueprint: Bootloader integration testing

    Quote Originally Posted by grahammechanical View Post
    It looks to me that a lot of development time is going into producing automated tests. Perhaps Canonical will sell its services as a software testing lab.

    Regards.
    Indeed. I think it's well known that automated testing has some inherent problems (or limitations):

    - It's impossible to test an entire OS
    -- The time and effort involved in writing enough test scripts to cover a whole OS, its applications, and its usage scenarios would demand a large dedicated team. It's not viable for most software companies.
    -- Some things just can't be automated;
    -- Automation frequently fails to detect things human senses do;
    - Test scripts break easily when software is updated;
    - Even if you eventually reach a point where a large part of the OS is being tested with automation, it's hard and time-consuming to keep the test scripts updated;
    - Automated testing results are purely quantitative. In most cases they carry no qualitative info.

    I think all software companies go through this phase: deploying many automated solutions, trying to maintain a huge pool of hardware in a lab to run the automated tests, etc. From a business point of view, it looks like a less expensive option than investing in (and dealing with) real people.

    In my opinion, this strategy breaks down within a couple of releases. If there were a bulletproof, almost magic solution that allowed software companies to drop humans and rely exclusively on automation (reaching similar or better results), make no mistake: Microsoft, IBM, Oracle, SAP, etc. would ship software with no bugs. They have the resources to deploy every existing automated testing solution and even create their own.

    The truth is that you can't have good software without investing in knowledge and training for human testers (employees, in the case of proprietary software, or the community, in the case of open source). Nothing beats the human brain (yet).

    Now, looking at it from a positive perspective, we do have room for automation: we know Ubuntu still gives a lot of importance to testing the installer and first boot (ISO-Testing). Although I can't see the point of ISO-Testing (versus in-depth package/application testing), ISO-Testing can and *should* be completely automated, freeing the community of testers from this process. I'm all for ISO-Testing automation, rather than processes that mechanize human testers. I'd love to see all ISO-Testing automated ASAP.
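    For what it's worth, the verification half of that kind of automation is simple to sketch: boot the ISO headless (e.g. in QEMU with a serial console), capture the console output to a log, and scan the log for pass/fail markers. Here's a minimal sketch of the log-scanning step in Python. All the marker patterns below are illustrative assumptions on my part, not taken from any actual Ubuntu QA tooling:

    ```python
    import re

    # Markers that, if seen on the guest's serial console, suggest the ISO
    # booted to a login prompt or started the installer. A real harness
    # would use whatever markers the image under test actually emits.
    SUCCESS_PATTERNS = [
        re.compile(r"login:"),     # reached a console login prompt
        re.compile(r"ubiquity"),   # installer process started
    ]

    # Markers that indicate the boot clearly went wrong.
    FAILURE_PATTERNS = [
        re.compile(r"Kernel panic"),
        re.compile(r"error: file .* not found"),  # a typical GRUB complaint
    ]

    def classify_boot_log(log_text: str) -> str:
        """Return 'pass', 'fail', or 'inconclusive' for a captured boot log."""
        for pat in FAILURE_PATTERNS:
            if pat.search(log_text):
                return "fail"
        for pat in SUCCESS_PATTERNS:
            if pat.search(log_text):
                return "pass"
        return "inconclusive"

    if __name__ == "__main__":
        sample = "GRUB loading...\nBooting kernel...\nubuntu login:"
        print(classify_boot_log(sample))  # pass
    ```

    The "inconclusive" result is the important one: when the script can't decide, a human still has to look, which is exactly the limitation I described above.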

    Regards,
    Effenberg

  5. #5
    Join Date
    Mar 2006
    Beans
    4,373
    Distro
    Ubuntu Development Release

    Re: Blueprint: Bootloader integration testing

    The reason automated testing breaks down is that no set of automated tests, no matter how comprehensive, can anticipate all of the possible uses (and misuses) that all potential users will put the system under test to. And that applies not just to software.
    if it ain't broke you haven't tweaked it enough
