
Software Engineering Fundamentals Matter More Than Ever

This is the fourth post in my series about integrating AI into my .NET Web Development course at Eastern Washington University. Here are links to the earlier posts: If AI Writes the Code, What Should We Teach?, A Powerful Hope for the Future, and Software Engineering Fundamentals Matter More Than Ever. This post is different: it's about what the students actually said when the quarter ended and I asked them to be honest.

I gave them a 14-question survey with 1–5 ratings and a set of open-ended questions. What follows is where we succeeded, where we fell short, and what the next iteration needs to look like.

The Numbers Tell a Clear Story

The overall average across all 14 questions was 4.375 out of 5. The highest-rated question was “I am confident that I could build and deploy a similar full-stack .NET app in a real job using AI tools” at 4.875. When your students leave a class feeling like they can do the work for real, that is a good feeling.

“AI improved app quality” and “Prepared for an AI-using workplace” both came in at 4.75. The students aren’t just confident: they believe the quality of what they built was genuinely better because of AI, and they feel ready for a world where AI is part of the daily toolkit.

The lowest score was “Worry about over-reliance on AI” at 3.625. I probably made a mistake here by flipping the agree/disagree orientation for this question. Some students barely worried. Others worried a lot.

What Speed Actually Means

Every student cited speed as AI’s most valuable contribution. But “speed” means different things to different people.

For a student with several years of experience before the AI era, speed meant removing boilerplate so he could use his “smarter brain” for harder questions. For a student carrying a heavy course load, speed was survival: AI kept her afloat when she otherwise wouldn’t have kept pace. For another student, speed was shock and delight — “I was staggered by how quickly I could create a game and have it up.” And for yet another, speed enabled rapid UI prototyping: spinning up interfaces, testing user flows, iterating. The front end became a canvas for ideas instead of a slog through CSS.

The Honesty I Didn’t Expect

Ask students what went wrong, and you usually get polite deflections. Not this group.

One student told me flat out that under deadline pressure he drifted into “vibe coding”, letting AI make architecture decisions instead of making them himself. The software felt “disconnected” from him, and he worried that AI-generated tests might not verify real requirements. 

Another student also noticed that tests were sometimes written “with the intention of passing” rather than validating requirements. When AI generates code and tests simultaneously, both can be technically correct and semantically hollow. 
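A minimal sketch of that failure mode, using xUnit and an invented pricing rule (the names `Pricing`, `ApplyDiscount`, and the 50% cap are all hypothetical, not from any student project). The first test mirrors the implementation and passes even though the code is wrong; the second states the requirement independently and catches the bug:

```csharp
using Xunit;

// Hypothetical business rule: a discount may never exceed 50% of the price.
public static class Pricing
{
    public static decimal ApplyDiscount(decimal price, decimal percent) =>
        price - (price * percent / 100m); // bug: the 50% cap is never applied
}

public class PricingTests
{
    // "Written with the intention of passing": it restates the formula,
    // so the missing cap goes unnoticed. 100 - 80 = 20, test passes.
    [Fact]
    public void ApplyDiscount_MatchesFormula() =>
        Assert.Equal(20m, Pricing.ApplyDiscount(100m, 80m));

    // Requirement-driven: states the rule without peeking at the code.
    // An 80% discount should still leave at least half the price; this fails.
    [Fact]
    public void Discount_IsNeverMoreThanHalfThePrice() =>
        Assert.True(Pricing.ApplyDiscount(100m, 80m) >= 50m);
}
```

The difference is where the expected value comes from: the first test derives it from the code, the second from the spec.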

A third student described chasing .NET errors in loops — asking AI to fix a bug, getting a new bug, and spiraling. When she stopped following AI’s suggestion to build her Gemini prompting in the front-end and instead maintained clean separation of concerns with a C# back-end service, her application became dramatically cleaner. She knew better than the AI. But the instinct to defer is real.
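A minimal sketch of the separation she landed on, with invented names (`IPromptService`, the endpoint path, the request shape are all assumptions, not her actual code): prompting lives behind one injectable back-end service, so the front-end only ever calls a typed method and never talks to the model directly.

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// The front-end depends on this interface, not on any AI vendor.
public interface IPromptService
{
    Task<string> SummarizeAsync(string text);
}

// Hypothetical implementation: prompt construction, API keys, and
// error handling all stay server-side, in one place.
public class GeminiPromptService : IPromptService
{
    private readonly HttpClient _http;

    public GeminiPromptService(HttpClient http) => _http = http;

    public async Task<string> SummarizeAsync(string text)
    {
        // Endpoint and payload are placeholders for whatever the real API expects.
        var response = await _http.PostAsJsonAsync("/v1/generate",
            new { prompt = $"Summarize: {text}" });
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

Registered with ASP.NET Core dependency injection, swapping Gemini for another model later means changing one class, not every page.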

And one student found that AI literally fabricated a fake website inside his login page. It worked and looked right, but none of his edits were being applied to the real pages. He caught it through visual testing — actually looking at what was running. Hallucination is not an abstract risk.

What the Industry Perspective Actually Delivers

Every student commented on the industry-taught aspect of the course, and the theme was consistent: relevance. Instructors are “up to date in what’s going on,” not telling students they won’t get a job if they can’t recite esoteric syntax rules. 

There’s a tension we felt all quarter between quality and functionality, between craftsmanship and delivery. It’s the same tension every working engineer navigates daily. 

What Needs to Change

Student feedback converged on several clear themes:

Teach prompting early and explicitly. Multiple students asked for guidance on effective prompts, not as a side topic, but as a foundational skill. Prompt-first assignments (writing a detailed prompt that specifies architecture and flow before any code is generated) force intentional design before implementation. Interestingly, this is a pretty different skill from writing code. Maybe we need more writing classes so students can clearly and effectively communicate intent.

Provide more scaffolding without removing freedom. Creative liberty needs to coexist with enough structure to prevent chaotic codebases. Early projects could easily devolve into large monolithic page files because no conventions were established upfront.

Be explicit about where AI falls short. AI isn’t good at everything, specifically: Azure deployment, complex refactors, and architecture decisions. Different AIs excel at different tasks, and knowing how to select an agent and craft a prompt matters. Some problem areas are much grayer than others, and that ambiguity often leaves AI too little to work with to be helpful.

Address tooling access and cost. GitHub Pro made a meaningful difference, and not everyone could afford it. Good AI costs good money: there is a significant difference in speed and quality between free and paid models. We should consider standardizing on a single AI tool with a short onboarding tutorial.

Let students own their repos. One of the goals of the class was for students to build a portfolio of their work. Projects should live in individual repositories from the start, not branches of the class fork. It’s the sense of ownership that makes work meaningful.

The Words They Chose

I asked each student for one word to describe their experience with AI in this class. Some responses were: Parasitic, Empowering, Modern, Productive, Dominant, Glass-ceiling-shattering!, Very Good

Where This Goes Next

The data is clear: this is working, but we don’t have it all figured out yet. Students feel more prepared for their careers (4.625/5). They’d recommend the course (4.75/5). They’re confident they can build and ship real applications (4.875/5). But they also told me frankly where things broke down.

The next iteration should front-load prompt engineering as a first-class skill, provide better instruction on architecture and design templates, add guardrails around the areas where AI reliably fails, and find a better way to promote high-quality architecture, design, and code.

The creative freedom of the projects provided significant motivation; I saw more hours poured into the homework than ever before. I will definitely keep the real deployments, the emphasis on engineering over syntax, the encouragement to build things that don’t exist on the internet, and the dad jokes.

I started this series asking, “If AI writes the code, what should we teach?” After a full quarter and honest student evaluations, I have at least a partial answer: students need to think clearly, design intentionally, prompt precisely, test ruthlessly, and maintain ownership of the systems they build.

The future of software engineering education is not about AI or not-AI. It’s about preparing humans to arrive at the workplace with the most valuable skill: thinking.