https://www.lynda.com/SQL-Server-tutorials/Developing-Microsoft-SQL-Server-2014-Databases/383047-2.html
Right click -> Design query in Editor
is an easy way to generate queries
Right click -> Insert Snippet
inserts various code templates
View -> Template Explorer
code templates for all queries
AND
Query -> Specify Values for Template Parameters
Creating database diagrams
- To see the various tables and the relationships between those tables
- To document your database with a diagram so that other administrators and developers can use it to understand how the overall database works together
Normalization
Schema
The idea of a schema is to separate ownership from usage: the objects within a schema can be assigned to a certain owner or group, making that group responsible for changes to the tables and other objects the schema contains. The users of the database are then completely separated out. This arose from problems we had in the past with orphaned objects.
CREATE SCHEMA HumanResources;
GO
ALTER SCHEMA HumanResources
    TRANSFER [dbo].[Instructor];
GO
One of your tasks when designing a database is to properly type your attributes, otherwise known as columns, or, more precisely, to type their domains.
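As an illustration, deliberate type choices might look like the sketch below. The table and column names are hypothetical, not from the course:

```sql
-- Hypothetical table: each column is typed to match its domain.
CREATE TABLE dbo.Student (
    StudentId int           NOT NULL PRIMARY KEY,  -- surrogate key
    LastName  nvarchar(50)  NOT NULL,              -- Unicode, variable length
    BirthDate date          NOT NULL,              -- no time component needed
    GPA       decimal(3, 2) NULL,                  -- e.g. 3.75; exact, not float
    IsActive  bit           NOT NULL DEFAULT 1     -- true/false flag
);
```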
Altering tables
Right click table name -> Design
Right click -> Generate change script
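The change script that the Design window generates is ordinary T-SQL; a minimal sketch of the kind of statements it emits, reusing the dbo.Instructor table from the schema example (the column names are assumptions):

```sql
-- Add a new nullable column.
ALTER TABLE dbo.Instructor
    ADD MiddleName nvarchar(50) NULL;

-- Widen an existing column (assumed to exist) without losing data.
ALTER TABLE dbo.Instructor
    ALTER COLUMN LastName nvarchar(100) NOT NULL;
```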
NULL can represent one of two states: unknown or not applicable. Because NULL can stand for either state, NULL is not equal to NULL, which causes all kinds of problems in the database world. It is one of the reasons why, when we do the final design of a database, we try everything we can to eliminate NULLs wherever possible.
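You can see NULL's behavior directly in a query: comparing NULL to NULL with = evaluates to UNKNOWN, not TRUE, so the ELSE branch wins, and the IS NULL predicate is the correct test:

```sql
-- NULL = NULL evaluates to UNKNOWN, so this returns 'not equal'.
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END AS Result;

-- Use IS NULL to test for NULL; this returns 'is null'.
SELECT CASE WHEN NULL IS NULL THEN 'is null' ELSE 'not null' END AS Result;
```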
A check constraint enforces rules on either an entire table or a single column.
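A minimal sketch of both forms, using the CourseLevel table defined below (the level range and the date columns are assumptions for illustration):

```sql
-- Column-level check: restricts a single column's values.
ALTER TABLE dbo.CourseLevel
    ADD CONSTRAINT CK_CourseLevel_Level CHECK ([level] BETWEEN 100 AND 900);

-- Table-level check (hypothetical columns): compares columns across the row.
-- ALTER TABLE dbo.Course
--     ADD CONSTRAINT CK_Course_Dates CHECK (EndDate >= StartDate);
```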
CREATE TABLE dbo.CourseLevel (
    SNo int PRIMARY KEY,
    [level] int NOT NULL,
    Course_Pk int NOT NULL,
    CONSTRAINT FK_Course_level_Course FOREIGN KEY (Course_Pk)
        REFERENCES dbo.Course (PK)
        ON DELETE CASCADE
        ON UPDATE CASCADE
);
SELECT C.Name, C.Subject, CL.[level]
FROM dbo.Course AS C
JOIN dbo.CourseLevel AS CL
    ON C.PK = CL.Course_Pk;
Views
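The triggers section later in these notes talks about views over multiple base tables; a minimal sketch of such a view, built on the Course and CourseLevel tables from earlier (the view name is an assumption):

```sql
-- A view joining two base tables; readable, but not directly updatable
-- across both tables (see the INSTEAD OF trigger discussion later).
CREATE VIEW dbo.vCourseLevels
AS
SELECT C.Name, C.Subject, CL.[level], CL.Course_Pk
FROM dbo.Course AS C
JOIN dbo.CourseLevel AS CL
    ON C.PK = CL.Course_Pk;
GO
```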
Stored Procedures
- A procedure in computer science is a block of code that does something: it performs a "procedure." We call these stored procedures because the definition of that block of code is stored within SQL Server.
- If you have permission to call the stored proc, you can call it from within SQL or externally. Stored procedures can accept parameters, and they can, and often do, return a result set and also a status code. Here's the syntax: CREATE PROCEDURE, then the schema, HumanResources, a dot, and then the name of the stored procedure. The usp prefix stands for "user stored procedure"; that's just a convention we tend to follow.
- The example procedure has two parameters, @LastName and @FirstName, both typed nvarchar(50). In the code block, SET NOCOUNT ON means we don't want to see how many rows come back. The SELECT returns FirstName, LastName, and Department from the vEmployeeDepartmentHistory view, where FirstName and LastName match the parameters and EndDate IS NULL; that is, they're still here. There are three ways of executing it: the first just uses EXECUTE and passes in the parameters positionally.
- The second passes them by name, @LastName = ... and @FirstName = ..., and the third simply flips the order of the named parameters. By the way, this example is straight out of the Microsoft Developer Network, which is an excellent resource I would advocate that you use. The advantages: stored procedures are stored within SQL Server, so the code is secure, and they have faster execution because on first execution the execution plan is cached.
- On subsequent executions, the cached plan is reused. We also reduce errors because the code is encapsulated. When to use stored procedures: a two-tier system, which is kind of old school, means you've got a front-end application of some type, Access, a small web application, a desktop application, whatever, talking directly to the database.
- In that situation, your stored procedures will implement your business rules. In an N-tier system (the old three-tier, then four-tier, and now N-tier because we have no idea where this will end up), we probably want to use stored procedures in a limited capacity. When we go N-tier, the business tier, the middle tier that sits between presentation and data, call it the rules tier if you like, normally contains the code that will be modified as business rules change.
Therefore, in that situation the code we would otherwise encapsulate in a stored procedure lives in the middle tier, and you want only limited use of stored procedures.
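The MSDN example described in the bullets above, reconstructed from the transcript; the view and columns come from the AdventureWorks sample database, so treat the exact names as assumptions:

```sql
CREATE PROCEDURE HumanResources.uspGetEmployees
    @LastName nvarchar(50),
    @FirstName nvarchar(50)
AS
    SET NOCOUNT ON;  -- suppress the "rows affected" message
    SELECT FirstName, LastName, Department
    FROM HumanResources.vEmployeeDepartmentHistory
    WHERE FirstName = @FirstName
      AND LastName = @LastName
      AND EndDate IS NULL;  -- still employed
GO

-- Three ways to execute it:
EXECUTE HumanResources.uspGetEmployees N'Ackerman', N'Pilar';  -- positional
EXECUTE HumanResources.uspGetEmployees
    @LastName = N'Ackerman', @FirstName = N'Pilar';            -- named
EXECUTE HumanResources.uspGetEmployees
    @FirstName = N'Pilar', @LastName = N'Ackerman';            -- named, order flipped
```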
Triggers
Triggers are blocks of code, just like the stored procedures we took a look at. The difference, and the reason we call these triggers, is that they are triggered. You got it: by a preset action. A logon, a modification, a deletion. That action triggers the block of code.
Triggers are hidden. That is, we can look at the code, but when the trigger fires on a table, it can cascade out and make all kinds of changes.
Triggers, as I said, have the ability to just get away, and I don't like that. There are also performance considerations: triggers can be less efficient, that is, use more resources, than other ways of doing the same thing. But that's not my main beef with them; it's the idea that they're difficult to manage. Types of triggers: a logon trigger fires when you log on; there's the action. A DDL (data definition language) trigger fires when you create a table, alter a table, and so on.
These two types, logon and DDL, are the less common triggers. The more common type is the DML (data manipulation language) trigger, covering the standard CRUD operations: create, read, update, delete. Those are by far the most common. A DML AFTER trigger fires after constraint processing, such as the check constraints we looked at earlier in this course, after declarative referential actions (PK/FK), after the inserted and deleted tables are created, and after the triggering action itself. Here's the brief syntax; the actual code goes after the AS: CREATE TRIGGER name ON table AFTER DELETE. So, after there's a delete, this trigger is going to fire.
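Spelled out, that brief syntax looks like the sketch below, reusing the dbo.Course table from earlier; the audit table and its columns are hypothetical:

```sql
CREATE TRIGGER trg_Course_AfterDelete
ON dbo.Course
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- The virtual "deleted" table holds the rows that were just removed.
    INSERT INTO dbo.CourseAudit (CoursePk, DeletedAt)  -- audit table assumed
    SELECT PK, SYSDATETIME()
    FROM deleted;
END;
GO
```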
DML INSTEAD OF triggers: now, I consider these the most dangerous type, but there's also a case on the other side; people like these triggers. They fire before constraint processing, in place of the triggering action, and after the inserted and deleted tables are created. Here's a little look: CREATE TRIGGER ... ON TableC FOR UPDATE AS. The FOR keyword causes the trigger to fire when we do the update.
Advantages of INSTEAD OF triggers: the big one, by far, is making views that are not updatable behave as if they were. If you remember from the views discussion, a view is updatable, but only over a single base table. So if we have a view that's looking at multiple base tables, and we want to allow a user to update those base tables through the view, we use an INSTEAD OF trigger: when the user updates something, the INSTEAD OF trigger fires and performs the updates against the base tables.
Now, people do this, and some people are really good at it. I, again, don't like it for the reasons I've already stated.
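A sketch of that pattern: an INSTEAD OF trigger that makes a view over dbo.Course and dbo.CourseLevel updatable by routing changes to the right base table. The view name and the columns it exposes are assumptions:

```sql
CREATE TRIGGER trg_vCourseLevels_InsteadOfUpdate
ON dbo.vCourseLevels            -- hypothetical view joining Course and CourseLevel
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Route the updated level values to the CourseLevel base table.
    UPDATE CL
    SET CL.[level] = i.[level]
    FROM dbo.CourseLevel AS CL
    JOIN inserted AS i          -- "inserted" holds the new row images
        ON CL.Course_Pk = i.Course_Pk;
END;
GO
```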
IN MEMORY
In-memory, like PowerPivot, uses memory compression techniques to get more bang for our memory buck. And when I'm talking about memory here, I'm talking about the RAM available to the server. The biggest changes in in-memory are two main areas of functionality: memory-optimized tables and indexes, meaning the standard disk-bound table and index now lives in memory, in SQL Server's memory space; and native-code stored procedures, meaning T-SQL stored procedures compiled into native code.
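A minimal sketch of both pieces as introduced in SQL Server 2014; the table and procedure are hypothetical, and the database would also need a MEMORY_OPTIMIZED_DATA filegroup set up beforehand:

```sql
-- A memory-optimized table: lives in SQL Server's memory space,
-- with a durable copy so the data survives a restart.
CREATE TABLE dbo.SessionState (
    SessionId int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Payload   nvarchar(4000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled stored procedure: T-SQL compiled into native code.
CREATE PROCEDURE dbo.usp_GetSession
    @SessionId int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT Payload FROM dbo.SessionState WHERE SessionId = @SessionId;
END;
GO
```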
So you have a table. A standard table is bound to the disk, and you have multiple users accessing it at the same time doing CRUD: creates, reads, updates, and deletes pounding against the database. For our updates, deletes, and other modifications, we have to lock the table. User A comes in and gets ready to do a delete, so that's a BEGIN TRANSACTION.
Now we apply a lock to the table, and the transaction processes. But while that transaction is processing, another user wants to come in and do a delete or an update or something like that. SQL Server holds that second user back; there are all kinds of locking algorithms (actually one locking algorithm, but all kinds of locks). So that one user is held back until the first user releases the lock, then the second user gets it, and back and forth, and this pounds these tables. As a matter of fact, that's one of the reasons it's very common to have one read-only database that people query, and another database where we do our inserts and deletes, with the two synced every so often.
So now, with in-memory, we have a lock-free design, and you say, "Would that not be chaos?" And I say, "That occurred to me too." But in a minute I'm going to explain to you why it is not. We now use optimistic multiversion concurrency. What is concurrency? Concurrency is the phenomenon I just described: many, many users need to read, update, delete, and modify a table.
These processes want concurrent access, but they need to be lined up to prevent problems such as reading incorrect information or updating information that's no longer there. That phenomenon is known as concurrency. The lock-free architecture we're describing is called optimistic because Microsoft has developed an algorithm that allows us to work without locks.
To do that, Microsoft uses a two-step validation process. By the way, before Microsoft released this into the wild, which is us, with SQL Server 2014, they did extensive testing; you can hit Microsoft's site and find the white papers on the studies they did to determine whether or not optimistic concurrency made sense. Another thing I should throw in here is why Microsoft would do this. The problem lies with Moore's law: it has slowed down.
Also, the optimizations available in algorithms to make table access, concurrency, stored procedures, and so on faster, combined with the slowing growth in processor speeds, mean we are no longer able to just say, "Well, throw in a faster processor or add more memory and get it going faster." It just doesn't work anymore. So Microsoft did this study, came up with this optimistic concurrency, and said, "Aha, if we move it off of the hard drive and into memory, will it be faster?" Why, of course.