Thursday, February 05, 2026

The Hottest Job Yahoo Article

Fantastic article - worth the read.


“Do what you love and you’ll never want for anything” was the invitation that pulled me into a career that has run from token ring networks to self‑healing agentic AI bots. That arc is the point: in this field, you are not signing up to learn a stack; you are signing up to live inside permanent change. Skills and learning are not decor on the side of the job; they are the job.

AI doesn’t reverse that deal; it enforces it. It strips out routine work, compresses entire roles into slivers of judgment, and raises the bar on what “senior” actually means. The only durable edge is how quickly you can retool and move up the abstraction ladder—architecture, risk, design, governance—while the tools below you keep shifting. In that sense, AI is not your replacement; it is your latest exam.

https://finance.yahoo.com/news/hottest-job-tech-writing-words-091701073.html


Tuesday, January 27, 2026

The Coming Storm: Why Banks’ Model Risk Management Is Struggling with GenAI


How AI, Regulation, and Complexity Are Outpacing Traditional SR 11-7 Programs




Banks’ SR 11-7 programs are running into structural limits with opaque, fast-changing, third-party AI—especially GenAI and agentic systems. These pain points will only intensify as AI scales across the industry.

Big Trend Lines

·      Rapid expansion of AI/ML and GenAI use cases (credit, fraud, operations, customer service, code, policy drafting) is turning “a few hundred models” into “thousands of models and AI services,” stressing inventories, validation capacity, and governance.

·      Regulators are reinterpreting SR 11-7 and layering on AI-specific expectations (explainability, fairness, continuous monitoring, third-party assurance, AI governance frameworks) rather than replacing it.

·      Firms are moving from periodic, static validation to “continuous model assurance” with near real-time monitoring, drift detection, and automated testing—often using AI to monitor AI.
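Continuous assurance programs typically start with simple distributional drift statistics computed on live model scores. Below is a minimal, self-contained sketch of one widely used metric, the Population Stability Index; the function name, bin count, and thresholds in the comments are illustrative industry conventions, not drawn from SR 11-7 or any regulation.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live score sample.

    A common rule of thumb in model monitoring (illustrative, not regulatory):
    PSI < 0.1 is stable, 0.1-0.25 warrants investigation, > 0.25 signals
    material drift.
    """
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges fitted on the baseline sample
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # Floor at a tiny share so empty buckets don't blow up the log
        return [max(c / len(values), 1e-6) for c in counts]

    e_share, a_share = bucket_shares(expected), bucket_shares(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_share, a_share))
```

A PSI of 0 means the live distribution matches the baseline exactly; in practice the metric is recomputed on a rolling window and escalated when it crosses the team's chosen threshold.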

Hard Problems That Are Getting Worse

1. Opaque and Third-Party Foundation Models

·      Many critical AI capabilities now rely on external LLMs and agentic platforms (e.g., GPT-style models), where training data, architecture, and versioning are not transparent.

·      Vendors frequently update models unilaterally, breaking reproducibility and undermining SR 11-7’s assumptions about fixed specifications and controlled change management.

·      Banks must attest to model risk controls over systems they neither fully understand nor control, including data handling and security inside third-party AI platforms.

Why it will worsen: As more workflows embed external GenAI (copilots for bankers, chatbots, automated coding, decision support), banks’ critical paths will hinge on black-box models whose behavior can shift overnight.

2. Explainability, Fairness, and Regulatory Scrutiny

·      Deep ML and GenAI models are inherently hard to explain to business owners, boards, auditors, and regulators. Standard SR 11-7 “conceptual soundness” and outcome testing do not fully answer “why did this particular decision happen?”

·      Regulators expect robust bias and disparate-impact analysis across sensitive attributes—technically challenging with complex features and nondeterministic LLM outputs.

·      Explainability tools such as SHAP and LIME help, but they are expensive, approximate, and difficult to scale, especially for generative models.

Why it will worsen: AI is increasingly used in high-stakes decisions (pricing, collections, underwriting, surveillance), raising expectations for individualized explanations and fairness proof—not just aggregate statistics.
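The aggregate fairness statistics mentioned above often begin with a simple selection-rate comparison, such as the “four-fifths” rule of thumb used in disparate-impact screening. A minimal hypothetical sketch (the function name, group labels, and data are my own illustrations, not from any supervisory text):

```python
def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Selection-rate ratio between a protected group and a reference group.

    outcomes: iterable of 0/1 approval decisions
    groups:   parallel iterable of group labels
    Under the common "four-fifths" rule of thumb, a ratio below 0.8 is often
    treated as a first-pass flag for adverse impact (a screen, not a verdict).
    """
    def rate(label):
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        if not decisions:
            raise ValueError(f"no observations for group {label!r}")
        return sum(decisions) / len(decisions)

    return rate(protected) / rate(reference)
```

A screen like this only covers aggregate outcomes; the individualized explanations the text anticipates require attribution methods (SHAP, LIME, and similar) on top.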

3. Continuous Drift, Instability, and Behavior Under Attack

·      AI models drift faster as data, markets, and user behavior change, and as vendors silently retrain foundation models.

·      SR 11-7’s periodic validation cadence is out of sync with systems whose risk profile can change weekly. Firms are trying to deploy real-time monitoring, but coverage is uneven.

·      Generative models are vulnerable to adversarial prompts, jailbreaks, and prompt-injection attacks that can bypass business rules or generate noncompliant content—risks traditional validation never anticipated.

Why it will worsen: As agentic AI chains tools and actions, single-prompt exploits can cascade across systems; drift and adversarial behavior will be continuous, not episodic.
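As a purely illustrative first line of defense against the prompt-injection attacks described above, some teams screen inbound text against a denylist before it reaches the model. The sketch below is deliberately naive (real controls layer classifiers, output scanning, and tool-call permissions on top), and every pattern here is my own illustrative assumption, not a vetted ruleset:

```python
import re

# Naive denylist of phrasings commonly seen in prompt-injection attempts.
# Illustrative only: a trivial rewording defeats any static pattern list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
    r"reveal (your|the) (system prompt|hidden instructions)",
]

def looks_like_injection(user_text: str) -> bool:
    """Flag input that matches any known injection phrasing (case-insensitive)."""
    text = user_text.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

The point of the sketch is the architectural one the post makes: validation has to move into the request path itself, because the risk arrives with each prompt, not at the next annual review.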

4. Defining “What Is a Model” and Managing Proliferation

·      Banks are struggling to decide what falls under SR 11-7: small decision engines, RPA scripts, GenAI assistants, scoring APIs, in-app recommendation engines, and “shadow AI” built by business units.

·      Model inventories and tiering schemes break down when hundreds of low-code/no-code apps, spreadsheets, and AI microservices all arguably qualify as models.

·      Controlling EUCs and “citizen-built” AI (e.g., staff wiring Excel to public LLMs) is increasingly difficult, creating blind spots in model risk and data-loss risk.

Why it will worsen: GenAI tools make it trivial for nontechnical staff to build quasi-models. Governance frameworks will be chasing an ever-expanding perimeter.

5. Data Governance, Privacy, and Security Across the Model Estate

·      AI models consume and sometimes embed highly sensitive data; with many models and pipelines, the aggregate “model data estate” becomes a major attack surface.

·      Public and vendor-hosted LLMs raise questions about where prompts, logs, and training data reside, how long they are retained, and whether they might leak proprietary or customer information.

·      Aligning model risk, operational risk, cybersecurity, privacy, and data-residency obligations into one coherent control set is proving difficult, especially across jurisdictions.

Why it will worsen: More models, more jurisdictions, more data types, and more cross-border cloud/AI services mean the data governance and DLP problem grows superlinearly.

6. Capacity, Skills, and Automation in MRM

·      Traditional MRM teams are now expected to cover ML, GenAI, cybersecurity-adjacent risks, and ethics/AI governance—the skills gap is real.

·      Manual validation and documentation can’t keep up with the volume and velocity of AI models, driving adoption of “MRM 2.0” platforms and AI-assisted validation.

·      Regulators will scrutinize any “AI that validates AI,” so firms must prove that automated validation is itself governed, tested, and explainable.

Why it will worsen: Model counts and regulatory expectations are rising faster than headcount; without aggressive automation and better processes, backlog and control gaps will grow.

Where This Is Heading

·      Governance is shifting from model-by-model compliance to ecosystem-level assurance: continuous monitoring across all AI systems, immutable audit trails for autonomous decisions, and integrated AI governance frameworks spanning risk, compliance, and technology.

·      Expect more explicit AI/ML guidance (OCC “responsible AI,” EBA AI guidelines, EU AI Act, Fed/OCC clarifications) that will layer on top of SR 11-7 rather than replace it, focusing on transparency, fairness, and cross-border consistency.

Wednesday, January 21, 2026

The Evolution of U.S. Tariff Authority (1789-2026)


-ZuCom


The power to impose tariffs in the United States has undergone a dramatic transformation, shifting from an exclusive Congressional prerogative to a significant tool of executive foreign policy and industrial strategy.


Early Republic to 1930s (Congressional Dominance): From the nation's founding, the U.S. Constitution vested Congress with sole authority over tariffs, primarily for revenue generation and later for protecting nascent domestic industries. This era culminated in the disastrous Smoot-Hawley Tariff Act of 1930, a Congressional effort that led to retaliatory tariffs, a collapse in global trade, and a worsening of the Great Depression. The unwieldy and politically susceptible nature of Congressional tariff-setting became acutely apparent.




1930s to 1960s (Delegation to the Executive): The fallout from Smoot-Hawley spurred a critical reform. The Reciprocal Trade Agreements Act of 1934 marked a pivotal shift, delegating authority to the President to negotiate and implement tariff reductions without direct Congressional approval for each agreement. This move aimed to depoliticize tariff decisions, foster international cooperation, and allow for more agile responses to global trade dynamics. Subsequent legislation, like the Trade Expansion Act of 1962 (Section 232), further empowered the President by allowing tariffs to be imposed under "national security" justifications.


Late 20th Century (Free Trade & Executive Negotiation): Through the latter half of the 20th century, successive administrations (e.g., Clinton, Obama) utilized delegated executive authority primarily to lower tariffs and engage in multinational free trade agreements (e.g., NAFTA, WTO), driving average U.S. tariff rates to historic lows. The focus was on fostering global economic integration and leverage through negotiation.


2010s to 2026 (Resurgent Executive Protectionism): The 2010s saw a reassertion of presidential power to raise tariffs, particularly under the Trump administration, often utilizing Section 232 (national security) and Section 301 (unfair trade practices) to impose duties on broad categories of imports. This approach, continued into 2026, reflects a shift toward using tariffs as a direct tool for industrial policy, reshoring jobs, and aggressive geopolitical leverage, including a proposed universal "reciprocal tariff." This period is characterized by the executive branch's rapid deployment of tariffs, often bypassing traditional Congressional debate, marking the most active period of executive tariff issuance in modern history.



Sunday, January 18, 2026

Identity and Ideology

“When your identity is your ideology, congratulations — you’ve officially screwed yourself.” - George Carlin



What the Quote Means



Carlin’s point here isn’t just a punchline; it’s a psychological and social observation:


1. Identity vs. Idea


  • An ideology is a set of beliefs or ideas you hold.
  • Your identity is who you are.
    Carlin warns that when you merge the two — when your self-worth, self-definition, and ego are built entirely around a belief system — you stop thinking and start defending.  



2. Disagreement Feels Like an Attack


  • Instead of treating disagreement as a chance to debate ideas, you perceive it as a personal insult or threat.
  • You react emotionally rather than rationally.  



3. Echo Chambers and Defensive Thinking


  • People increasingly surround themselves with others who agree, creating bubbles where everyone reinforces the same beliefs.
  • Within those bubbles, facts, logic, and humor fall away because acknowledging an error feels like losing who you are.  



4. Result: Polarization and Closed Minds


  • Rather than engaging with other perspectives, people “double down”: louder, angrier, more rigid.
  • The belief becomes sacred rather than examined.