Software Development Practices
This document outlines and provides guidance on the software development processes and practices used by 4R Digital. These are split into two main areas:
Software Development Lifecycle processes, including:
Coding principles and best practices
Security: the team reviews and monitors the OWASP recommendations. We will be conducting our own internal security testing and will commission a security audit (penetration testing) with an external supplier. We are in the process of configuring Snyk static security analysis and have been using Defender for Cloud to provide platform protection.
Agile Process
We operate a Scrum process within the team, involving the following ceremonies:
PI Planning - Approximately every two months, we run our Project Iteration (PI) planning. In these sessions, we look ahead across 3-4 sprints, grooming the backlog and estimating and prioritising features for the sprints ahead, in order to meet our project milestones.
Daily standups - at which we review the DevOps board and track progress. The aim is to work out as a team the best way to tackle the outstanding work.
End of sprint demos - We take these in turns, with one developer taking responsibility for demoing each system area and the team's work as a whole.
Sprint Planning and Estimation - we do this as a team at the start of each sprint (normally a two-week cycle).
Sprint Retros (optional) - from time to time we hold a sprint retro to drive improvement; this is something we would like to do more often.
For more information, please see our Product Manual.
Source Control
We use Git in Azure DevOps for our source control. As we are a small team, we have decided to take a monorepo approach, so that we can easily share C# class libraries and tightly link all the code that goes into a release. Guidance for using our source control:
Create and work on a feature branch for each feature. We follow Microsoft's recommended approach for our branching strategy: name your branch with the work item number plus a short, human-readable description (e.g. feature/1234-add-payment-validation).
Avoid using just the US/PBI number as the branch name, as it is not intuitive to humans.
Ensure you push your changes to your remote branch at least once per day. Don't worry about dirty commits as they will get squashed eventually.
When your changes are complete and you have added tests, raise a Pull Request, select one or two reviewers (you don't have to add the whole team) and attach the User Story.
Include a brief description with the PR, mentioning what is being delivered to help the reviewers.
Engage constructively in PR code reviews; don't just wave them through without ensuring the code follows the quality guidelines mentioned above.
Testing Strategy
To ensure the Cavex Platform is robust and meets the specified requirements, thorough testing and validation procedures are executed. This section details the methodologies employed and their outcomes, ensuring that deliverables are ready for deployment and align with the agreed standards.
4R Digital creates the acceptance criteria for features and stories using industry-standard best practices, all documented within Azure DevOps. These tests are validated as part of the acceptance of the Feature or Story when the feature is released. Testers are specialised team members who build test cases that focus on exercising edge cases and exception scenarios, which are included in the test pack.
The following table outlines our approach to testing.
| Item | Description |
|---|---|
| Test Goals / Objectives | To deliver high-quality software, react to business changes quickly and accurately, automate as much testing as possible, and prevent critical bugs in production. |
| Sprint timelines | Sprint length - 2 weeks. Items are currently released into the test environment as each one is completed, then promoted into the internal environment at the end of the sprint. |
| Lifecycle of tasks/tickets | * A ticket is ready for development when it has been included in sprint planning and the Acceptance Criteria have been written by the Product Owner * At that point the Tester can go through the acceptance criteria and plan testing / write test cases and test plans * The ticket is ready for test when the developer has met the acceptance criteria * The ticket then moves to the Ready for Test stage * The Tester starts testing * If a related bug is found, it is raised as a task against the original ticket * Bugs that are unrelated to sprint work are raised as separate tickets * Once all related bugs are tested and passed, the ticket is closed |
| Test roles within the team | The various types of testing and the responsible parties are documented below under Test Phases and Responsible Parties. |
| Test tools | Test automation tools: xUnit, Moq, Microsoft API testing framework, Jasmine, Cypress, Postman (a sketch of a typical automated test follows this table). Manual testing tools: manual browser testing, Scribe to record test cases, DevOps to create test cases, Azure Data Studio, SnagIt for screen grabbing and editing, Word, Excel. |
| Test environments | DevOps pipelines, Staging environment, Test environment, Pilot environment |
| Test monitoring and reporting | DevOps is used to record test stages and cycles. A test report is generated prior to each release, and a change log is generated for each release. |
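To make the automation-first goal above concrete, here is a minimal sketch of a unit test in the style of our xUnit/Moq tooling. The IFxRateProvider and PriceCalculator types are hypothetical, invented for illustration rather than taken from the Cavex codebase.

```csharp
using Moq;
using Xunit;

// Hypothetical domain types, for illustration only.
public interface IFxRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

public class PriceCalculator
{
    private readonly IFxRateProvider _rates;
    public PriceCalculator(IFxRateProvider rates) => _rates = rates;

    public decimal ConvertPrice(decimal amount, string from, string to) =>
        amount * _rates.GetRate(from, to);
}

public class PriceCalculatorTests
{
    [Fact]
    public void ConvertPrice_AppliesTheProvidedRate()
    {
        // Arrange: mock the dependency so the test exercises only this unit.
        var rates = new Mock<IFxRateProvider>();
        rates.Setup(r => r.GetRate("USD", "KES")).Returns(130m);
        var calculator = new PriceCalculator(rates.Object);

        // Act
        var result = calculator.ConvertPrice(10m, "USD", "KES");

        // Assert
        Assert.Equal(1300m, result);
        rates.Verify(r => r.GetRate("USD", "KES"), Times.Once);
    }
}
```

Mocking the rate provider keeps the test focused on the calculator itself, which is what allows these tests to run quickly in the DevOps pipeline.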
Test Phases and Responsible Parties
The table below describes our test phases:
| S.No | Test Phase | Description | Responsible Party | Supporting Party |
|---|---|---|---|---|
| 1 | Unit and component tests | Tests are written to run before and after code changes to confirm that the system works as intended. | Developer | Tester |
| 2 | Functional tests | Tests for stories, features, and capabilities validate that functionality works the way the customer intended, ensuring seamless communication between system modules and smooth data transfer. | Functional Tester | Product Owner |
| 3 | System-level integration tests | Validate that the whole system's behaviour meets usability, functional, and non-functional requirements (a sketch of an automated API-level test follows this table). | Systems Engineer | Technical Lead, Product Owner |
| 4 | User Acceptance Testing | Performed to validate that end users can perform their tasks on the new system in real-world scenarios, validating the end-to-end business flow. | Product Owner, Business Analyst | Cavex |
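For the functional and integration phases, an in-process API test is one common automation approach. The sketch below uses Microsoft.AspNetCore.Mvc.Testing (one plausible reading of the "Microsoft API testing framework" listed in our test tools); the public Program entry point and the /health endpoint are assumptions made purely for illustration.

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// Hypothetical integration test. Assumes an ASP.NET Core app whose
// entry point (Program) is visible to the test project and which
// exposes a /health endpoint; both names are illustrative only.
public class HealthEndpointTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public HealthEndpointTests(WebApplicationFactory<Program> factory) => _factory = factory;

    [Fact]
    public async Task HealthEndpoint_ReturnsOk()
    {
        // The factory boots the whole application in memory, so the request
        // exercises routing, middleware, and handlers together.
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/health");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```

Because the application is bootstrapped in memory, a failing route, middleware misconfiguration, or dependency-injection problem shows up here before the change ever reaches the Test environment.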
Release Strategy
We will follow a rolling release approach. At a high level:
| Stage | What's involved | Who's involved |
|---|---|---|
| Feature design | Define features / use cases; create the test plan. | Product Owner, Test Team, Developers |
| Release strategy | Assess which parts of the system are affected and what testing is needed; create a run book. Collaborate with stakeholders. | PO, Test Team, Tech Lead, Stakeholders |
| Build | Develop and integrate in a feature branch; progress through the test environments. | Developers / DevOps Team |
| Test | Test the feature(s) in the Test environment. | Test Team |
| Source control | Promote changes to the main branch. | DevOps Team |
| User Acceptance Test | Testing conducted by the Product Owner and key stakeholders. | Product Owner, Stakeholders |
| Test Report | Complete and record the test summary report. | Test Team |
| Approval | Send the test plan and release notes to key stakeholders for review and sign-off. | Stakeholders |
| Deploy | Deploy the feature(s) to production, execute the run book (for any non-automated steps), and conduct sanity tests in production. | DevOps Team |
| Source control | Tag the main branch with the release number. | Developers |
| Communicate | Email key stakeholders with the release notes. | Product Owner |
Issue Tracking
Our issue tracking process is as follows:
Issue Process flow
Bug found and raised with the support team
Reproduced in production as well as in the test environment
Bug details and reproduction steps recorded in DevOps
Priority, severity, and impact of the bug discussed and assessed with the BA/PO
Assigned to a developer and fixed according to that priority
Deployed to the test environment
Tested by reproducing the exact steps, plus some exploratory testing around the issue
Regression testing performed, depending on what the bug was and its severity
Retested in the pre-production environment
Deployed to the production environment
Sanity tests conducted in production
Defect prioritisation
We prioritise our defects as follows:
| Priority | MoSCoW | Definition | Example | Business Impact |
|---|---|---|---|---|
| P1 | MUST HAVE | Must be fixed before the final solution is acceptable. | Affects a critical function of the product and/or fails to meet documented acceptance criteria. | High |
| P2 | SHOULD HAVE | Should be fixed, if possible, within the timeframe of the project. | Function is non-critical but does not behave as expected, adversely affecting user experience. | Moderate |
| P3 | COULD HAVE | Could be fixed if time/resource permits. | Feature functions as expected but the UI/UX could be improved to enhance user experience. Defect does not impede progress. | Low |
| P4 | WOULD HAVE | Low-priority issue that would be fixed if time/resource permits. | Feature functions as expected; only minor cosmetic changes are needed. | None |
Defect Definitions following Final Release (under warranty)
| Priority | MoSCoW | Definition | Example | Business Impact |
|---|---|---|---|---|
| P1 | MUST FIX | An issue that must be fixed immediately, with as many resources as required. | A full outage, or a critical function of the product is unavailable for everyone, with no known workaround. | Severe |
| P2 | SHOULD FIX | An issue that should be fixed on a reasonable timescale. | An issue that would otherwise be a P1 but has a reasonable workaround, or an issue that is important to a large percentage of users and is connected to system functions. | High |