FIN510 Assessments
Complete assessment guide and resources
1 Assessment Overview
FIN510 uses a three-component assessment structure designed to evaluate both theoretical understanding and practical implementation skills.
1.1 Assessment Summary
| Component | Weight | Format | Timing | Feedback |
|---|---|---|---|---|
| Coursework 1 | 34% | 2 × MCQ Tests (17% each) | Weeks 5 & 12 | Following day |
| Coursework 2 | 16% | Case Study Analysis | Week 7 | +2 weeks |
| Coursework 3 | 50% | Python Project Report | January 14th | +3 weeks |
1.2 Coursework 1: Multiple Choice Tests (34%)
1.2.1 Test 1 - Week 5 (17%)
Coverage: Python Foundations (Weeks 1-4)
Topics Include:
- Python programming fundamentals for finance
- Financial data acquisition and APIs
- Time series analysis basics
- Statistical analysis and risk metrics
- Data visualization techniques
Format: 40 minutes, closed book, via Blackboard
1.2.1.1 Sample Questions:
Question 1: Which Python expression correctly calculates log returns?
- (price_t / price_{t-1}) - 1
- np.log(price_t / price_{t-1}) ✓
- (price_t - price_{t-1}) / price_{t-1}
- price_t - price_{t-1}
Explanation: Log returns are calculated as the natural logarithm of the price ratio, which provides better statistical properties for financial analysis.
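For revision, a minimal sketch of both calculations with NumPy and pandas (the price series is illustrative):

```python
import numpy as np
import pandas as pd

# Illustrative daily closing prices
prices = pd.Series([100.0, 101.5, 99.8, 102.3], name="Close")

# Simple returns: (P_t / P_{t-1}) - 1
simple_returns = prices.pct_change()

# Log returns: ln(P_t / P_{t-1}); time-additive, so multi-period returns are simple sums
log_returns = np.log(prices / prices.shift(1))

print(simple_returns.round(4))
print(log_returns.round(4))
```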
Question 2: What does a Sharpe ratio of 1.5 indicate?
- 150% annual return
- 1.5 units of excess return per unit of risk ✓
- 1.5% volatility
- $1.50 profit per dollar invested
Explanation: The Sharpe ratio measures risk-adjusted returns by dividing excess return by volatility.
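A minimal sketch of an annualised Sharpe ratio calculation (the daily returns and the 2% risk-free rate are illustrative assumptions):

```python
import numpy as np

# Illustrative daily strategy returns and annual risk-free rate
daily_returns = np.array([0.0010, -0.0020, 0.0015, 0.0005, 0.0020])
risk_free_annual = 0.02
trading_days = 252

# Excess return per day over a per-day risk-free rate
excess = daily_returns - risk_free_annual / trading_days

# Annualised Sharpe ratio: mean excess return / volatility, scaled by sqrt(252)
sharpe = excess.mean() / excess.std(ddof=1) * np.sqrt(trading_days)
print(f"Annualised Sharpe ratio: {sharpe:.2f}")
```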
1.2.2 Test 2 - Week 12 (17%)
Coverage: AI and Advanced Applications (Weeks 6-11)
Topics Include:
- Machine learning model evaluation
- Natural language processing in finance
- AI ethics and bias considerations
- Automated trading systems
- Production deployment concepts
1.3 Coursework 2: Case Study Analysis (16%)
1.3.1 Format
- Duration: 40 minutes
- Type: Open book
- Location: Computer lab
- Week: 7
1.3.2 Structure
- Part A (50%): 20 multiple choice questions
- Part B (50%): Written analysis of provided dataset
1.3.3 Sample Case Study
Scenario: You are a data scientist at a fintech startup developing a credit scoring model. You have been provided with results from three different machine learning models trained on lending data.
Your Task: Evaluate the models and recommend the best approach for production deployment.
Provided Data:
- Model performance metrics (accuracy, precision, recall, AUC)
- Feature importance rankings
- Bias audit results
- Computational requirements
Analysis Requirements:
1. Compare model performance across different metrics (a brief comparison sketch follows this list)
2. Assess potential bias and fairness issues
3. Consider regulatory compliance requirements
4. Recommend deployment strategy with justification
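As an illustration of requirement 1, a minimal sketch of how the provided metrics might be tabulated and ranked with pandas; the model names and figures below are hypothetical, not the actual case study data:

```python
import pandas as pd

# Hypothetical performance metrics for three candidate models
results = pd.DataFrame({
    "model": ["logistic_regression", "random_forest", "gradient_boosting"],
    "accuracy": [0.81, 0.86, 0.88],
    "precision": [0.74, 0.79, 0.80],
    "recall": [0.68, 0.72, 0.75],
    "auc": [0.84, 0.89, 0.91],
})

# Rank each model on every metric (1 = best) to make trade-offs visible
ranks = results.set_index("model").rank(ascending=False)
print(ranks)
```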
1.3.3.1 Assessment Criteria:
- Technical Understanding (40%): Correct interpretation of ML metrics
- Critical Analysis (30%): Evaluation of trade-offs and limitations
- Business Application (20%): Practical deployment considerations
- Communication (10%): Clear and professional presentation
1.4 Coursework 3: Final Project (50%)
1.4.1 Project Requirements
- Report: 2,000 words maximum
- Code: Jupyter notebook with implementation
- Due Date: January 14th, 2026 by 12:00
- Submission: Turnitin + GitHub repository
1.4.2 Project Options
1.4.2.1 Option 1: AI-Powered Trading Strategy
Objective: Develop a complete algorithmic trading system
Requirements:
- Implement multiple ML models for price prediction
- Create backtesting framework with realistic constraints (a minimal sketch follows the deliverables below)
- Build performance analytics dashboard
- Address risk management and regulatory considerations
Deliverables:
- Trading strategy implementation
- Backtesting results and analysis
- Risk assessment framework
- Performance comparison with benchmarks
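A minimal sketch of a vectorised backtest for the backtesting requirement; the moving-average signal, transaction-cost level, and simulated prices are illustrative assumptions, not a required design:

```python
import numpy as np
import pandas as pd

def backtest_signal(prices: pd.Series, signal: pd.Series, cost_bps: float = 5.0) -> pd.Series:
    """Vectorised backtest: hold yesterday's signal, deduct transaction costs, return the equity curve."""
    returns = prices.pct_change().fillna(0.0)
    position = signal.shift(1).fillna(0.0)        # trade on the next bar to avoid look-ahead bias
    turnover = position.diff().abs().fillna(0.0)  # position changes incur costs
    strategy = position * returns - turnover * cost_bps / 10_000
    return (1 + strategy).cumprod()

# Illustrative use: moving-average crossover on simulated prices
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.cumprod(1 + rng.normal(0, 0.01, 500)))
signal = (prices.rolling(20).mean() > prices.rolling(50).mean()).astype(float)
equity = backtest_signal(prices, signal)
print(f"Final equity multiple: {equity.iloc[-1]:.2f}")
```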
1.4.2.2 Option 2: Financial NLP Application
Objective: Build sentiment analysis system for financial markets
Requirements:
- Multi-source sentiment analysis (news, social media, earnings calls)
- Correlation analysis with market movements (a minimal sketch follows the deliverables below)
- Real-time monitoring dashboard
- Model validation and performance evaluation
Deliverables:
- NLP pipeline implementation
- Sentiment-price correlation analysis
- Interactive monitoring dashboard
- Business case and ROI analysis
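A minimal sketch of the sentiment-return correlation step, assuming a daily aggregated sentiment score has already been produced by the NLP pipeline; all figures below are hypothetical:

```python
import pandas as pd

# Hypothetical daily aggregated sentiment scores and next-day market returns
data = pd.DataFrame({
    "sentiment": [0.20, -0.10, 0.40, 0.10, -0.30, 0.50, 0.05, -0.15],
    "next_day_return": [0.004, -0.002, 0.006, 0.001, -0.005, 0.007, 0.000, -0.003],
})

# Lead-lag check: does today's sentiment correlate with tomorrow's return?
corr = data["sentiment"].corr(data["next_day_return"])
print(f"Sentiment / next-day return correlation: {corr:.2f}")

# A rolling correlation shows whether the relationship is stable over time
data["rolling_corr"] = data["sentiment"].rolling(5).corr(data["next_day_return"])
print(data["rolling_corr"])
```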
1.4.2.3 Option 3: Robo-Advisory Platform
Objective: Create automated portfolio management system
Requirements:
- Risk profiling questionnaire and algorithm
- Modern Portfolio Theory implementation (a minimal sketch follows the deliverables below)
- Rebalancing and tax optimization
- Client reporting and communication system
Deliverables:
- Complete robo-advisory system
- Portfolio optimization algorithms
- Client interface and reporting
- Regulatory compliance framework
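A minimal sketch of the Modern Portfolio Theory component: a long-only minimum-variance optimisation with a return target, solved with scipy.optimize. The expected returns, covariance matrix, and 9% target are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative annualised expected returns and covariance matrix for three asset classes
mu = np.array([0.06, 0.08, 0.12])
cov = np.array([
    [0.04, 0.01, 0.02],
    [0.01, 0.09, 0.03],
    [0.02, 0.03, 0.16],
])

def portfolio_variance(w: np.ndarray) -> float:
    return float(w @ cov @ w)

target_return = 0.09
constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # fully invested
    {"type": "eq", "fun": lambda w: w @ mu - target_return},  # meet the return target
]
bounds = [(0.0, 1.0)] * len(mu)                                # long-only weights

result = minimize(portfolio_variance, x0=np.full(3, 1 / 3),
                  bounds=bounds, constraints=constraints)
print("Optimal weights:", result.x.round(3))
```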
1.4.2.4 Option 4: Credit Risk AI System
Objective: Develop ML-based credit assessment platform
Requirements:
- Advanced feature engineering with alternative data
- Multiple ML model comparison
- Bias detection and fairness algorithms (a minimal sketch follows the deliverables below)
- Model governance and monitoring framework
Deliverables:
- Credit scoring model implementation
- Bias audit and mitigation strategies
- Model monitoring dashboard
- Regulatory compliance documentation
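A minimal sketch of a baseline scoring model plus one simple bias check (an approval-rate gap between groups, i.e. demographic parity difference). The data here is synthetic and purely illustrative; real submissions would use meaningful features, labels, and a fuller fairness audit:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, purely illustrative lending data
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "income_k": rng.normal(50, 15, 1_000),       # income in £000s
    "debt_ratio": rng.uniform(0.05, 0.60, 1_000),
    "group": rng.integers(0, 2, 1_000),          # protected attribute used only for the audit
    "default": rng.integers(0, 2, 1_000),
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["income_k", "debt_ratio"]], df["default"], test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Fairness check: approval-rate gap between groups (demographic parity difference)
test = df.loc[X_test.index]
approved = pd.Series(model.predict(X_test) == 0, index=X_test.index)  # predicted non-default = approved
gap = abs(approved[test["group"] == 0].mean() - approved[test["group"] == 1].mean())
print(f"Approval-rate gap between groups: {gap:.3f}")
```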
1.4.3 Assessment Criteria
1.4.3.1 Technical Implementation (40%)
- Code Quality: Clean, well-documented, reproducible code
- Algorithm Selection: Appropriate choice of methods and techniques
- Data Handling: Proper preprocessing and validation
- Performance: Model accuracy and computational efficiency
1.4.3.2 Business Application (25%)
- Problem Definition: Clear business case and objectives
- Implementation Strategy: Realistic deployment considerations
- Risk Assessment: Identification and mitigation of key risks
- Value Proposition: Clear articulation of business benefits
1.4.3.3 Critical Analysis (20%)
- Literature Review: Integration of relevant academic and industry sources
- Methodology Justification: Rationale for chosen approaches
- Limitations Discussion: Honest assessment of model constraints
- Alternative Approaches: Consideration of other possible solutions
1.4.3.4 Communication (15%)
- Report Quality: Professional writing and presentation
- Visualization: Effective charts, graphs, and dashboards
- Documentation: Clear technical documentation
- Referencing: Proper academic citation format
1.5 Assessment Support
1.5.1 Preparation Resources
- Sample Projects: Available on course GitHub
- Code Templates: Starter notebooks for each project type
- Rubric Details: Comprehensive marking criteria
- Past Examples: Anonymized high-quality submissions
1.5.2 Getting Help
- Office Hours: Weekly consultation sessions
- Peer Review: Structured feedback sessions
- Technical Support: Lab assistants for coding issues
- Writing Support: University writing center resources
1.5.3 Submission Guidelines
- File Naming: SurnameFirstNameBNumber_ProjectTitle
- Code Repository: GitHub with clear README
- Report Format: Word document via Turnitin
- Late Penalties: As per university policy
1.6 Academic Integrity
1.6.1 Permitted Collaboration
- Discussion: General concepts and approaches
- Code Review: Peer feedback on implementation
- Resource Sharing: Publicly available datasets and libraries
1.6.2 Prohibited Activities
- Code Copying: Submitting someone else’s implementation
- Report Plagiarism: Copying text from any source without attribution
- Data Fabrication: Creating fake results or datasets
1.6.3 AI Tools Policy
- ChatGPT/Copilot: Permitted for learning and debugging
- Code Generation: Must be acknowledged and understood
- Report Writing: AI assistance must be disclosed
- Final Responsibility: You must understand and defend all submitted work
1.7 Grade Boundaries
| Classification | Percentage Range | Description |
|---|---|---|
| First Class | 70-100% | Outstanding work demonstrating innovation and mastery |
| Upper Second | 60-69% | Good quality work with solid understanding |
| Lower Second | 50-59% | Acceptable work meeting basic requirements |
| Third Class | 40-49% | Adequate work with limited demonstration of skills |
| Fail | 0-39% | Insufficient demonstration of learning outcomes |
For detailed rubrics and additional support materials, see the Course Handbook.