qntmfred retired 40723 Posts user info edit post |
the one i just posted worked. turns out to get what i wanted from the real db, i only had to use one of the nested selects too. thanks everybody who helped 5/23/2008 12:14:27 PM |
evan All American 27701 Posts user info edit post |
Quote : | "How do you copy/rename an InnoDB database with relational restraints (with MySQL)? If I use phpmyadmin, it just tries to (in alphabetical order) copy table structure, copy table data, move to next table. Usually this fails very early with a foreign key constraint error. I had to manually copy each table over in order so that the restraints would hold up, but surely there is better way. Does anyone know it?" |
set foreign_key_checks=0 5/23/2008 12:24:51 PM |
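For reference, a minimal sketch of how that applies to the copy (database and table names are placeholders), once the table structures exist in the target database:

SET foreign_key_checks = 0;
-- copy the data for each table, in any order, since FK checks are off for this session:
INSERT INTO new_db.some_table SELECT * FROM old_db.some_table;
-- ...repeat for the remaining tables...
SET foreign_key_checks = 1;

With the checks disabled for the session, the order the tables are copied in no longer matters.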
DirtyMonkey All American 4269 Posts user info edit post |
awesome. thanks! 5/23/2008 12:48:26 PM |
qntmfred retired 40723 Posts user info edit post |
anybody have any experience storing xml in a mssql db? i just got added to a new team and they are storing complex objects as xml in the db. i talked to the guy responsible and he says there is support by sql server for this type of thing. clearly it can be done but i'm not convinced it should be done. it just feels really dirty and lazy to me. does this actually work well? 7/9/2008 10:49:25 AM |
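For reference, SQL Server 2005 does ship a native xml data type with query methods; a minimal hedged sketch with illustrative names (not the team's actual schema):

CREATE TABLE Widgets (
    WidgetId INT IDENTITY PRIMARY KEY,
    Payload  XML NOT NULL   -- the complex object, stored as XML
);

-- pull a scalar value back out of the stored XML:
SELECT WidgetId,
       Payload.value('(/widget/name)[1]', 'varchar(50)') AS Name
FROM Widgets;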
DirtyMonkey All American 4269 Posts user info edit post |
i can't give specific advice on storing xml in a database, but i have tens of thousands of rows of serialized php objects stored in a database.
we've got a bunch of real estate listings from multiple organizations, so we normalize the data in our own format. a single listing's data is spread across dozens of tables so rather than execute several queries with complex joins, we just build the object once, store it in a "cache" table, and reference it from there when it needs to be read. it only rebuilds if the listing changes. and then of course we store it in APC when a request comes in to reduce db hits some more, but that's getting off topic.
anyway, it's not XML, but it seems to work pretty well for us - at least it reduces db hits dramatically. 7/9/2008 3:02:24 PM |
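A rough sketch of that cache-table pattern with hypothetical names (the real schema will differ): the application serializes the fully built listing and upserts it keyed by listing id, so reads hit one table instead of dozens:

CREATE TABLE listing_cache (
    listing_id INT UNSIGNED NOT NULL PRIMARY KEY,
    payload    MEDIUMTEXT   NOT NULL,  -- serialized listing object (PHP serialize(), XML, etc.)
    updated_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

-- rebuild and overwrite only when the listing changes:
INSERT INTO listing_cache (listing_id, payload)
VALUES (42, '...serialized object...')
ON DUPLICATE KEY UPDATE payload = VALUES(payload);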
GotYoNacho Veteran 280 Posts user info edit post |
I need to do the following.
I have a table as shown:
id | group_id | name      | colB | colC
 1 |        5 | computerA | x    | x
 2 |        5 | computerB | x    | x
 3 |        5 | computerC | x    | x
 4 |        5 | computerD | x    | x
 5 |        7 | computerA | x    | x
 6 |        7 | computerB | x    | x
 7 |        8 | computerA | x    | x
I have a list of names that refer to the values that would be in name col (computerD, computerE, computerF)
I want to add records for (computerD, computerE, computerF) where group_id is in (5, 7) and they don't already exist
All the columns will be the same except for the name col (and the unique id col)
So end result would be
id | group_id | name      | colB | colC
 1 |        5 | computerA | x    | x
 2 |        5 | computerB | x    | x
 3 |        5 | computerC | x    | x
 4 |        5 | computerD | x    | x
 5 |        7 | computerA | x    | x
 6 |        7 | computerB | x    | x
 7 |        8 | computerA | x    | x
 8 |        5 | computerE | x    | x
 9 |        5 | computerF | x    | x
10 |        7 | computerD | x    | x
11 |        7 | computerE | x    | x
12 |        7 | computerF | x    | x
[Edited on August 22, 2008 at 11:09 AM. Reason : rename] 8/22/2008 10:50:21 AM |
synapse play so hard 60935 Posts user info edit post |
plz to use something else besides A,B,C,1,2,3,X in your example
where's the list of values stored? 8/22/2008 10:59:51 AM |
tsavla All American 6787 Posts user info edit post |
why couldn't you use a simple insert statement? 8/22/2008 11:12:05 AM |
GotYoNacho Veteran 280 Posts user info edit post |
the list of values is stored in a C# variable.
i basically need to add/remove computer names to all the associated group_ids needed
so originally i have a group (lets say group5) with computers 1, 2, 3
i update the list of computers to be 1, 3, 5 (in my c# program)
i want to update the table to add a record for computer5 with group5, then remove the instance of computer2 with group5 8/22/2008 11:13:32 AM |
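A hedged sketch of one way to express that in SQL, assuming the table is named computers and matches the example above (the real names may differ): add the name to the group only if that (group_id, name) pair isn't already there, copying the other columns from an existing row of the group, and delete names that dropped off the list:

-- add computer5 to group 5 only if it doesn't exist yet
INSERT INTO computers (group_id, name, colB, colC)
SELECT DISTINCT c.group_id, 'computer5', c.colB, c.colC
FROM computers c
WHERE c.group_id = 5
  AND NOT EXISTS (SELECT 1 FROM computers d
                  WHERE d.group_id = c.group_id AND d.name = 'computer5');

-- remove computer2 from group 5
DELETE FROM computers WHERE group_id = 5 AND name = 'computer2';

The C# side would then just loop this over each (group, name) pair it needs to add or remove.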
qntmfred retired 40723 Posts user info edit post |
i have the following table on a mysql db
requests
--------
id        (int)
userid    (int)
timestamp (datetime)
date      (date)
about 30M rows. the most common query is SELECT MIN(date) FROM requests WHERE userid=$userid, and sometimes SELECT date FROM requests WHERE userid=$userid ORDER BY date ASC LIMIT 1 (which the server should use the same execution path for, right?)
it's taking about 3 minutes to execute. i have an index on userid_date, but the cardinality is still about 2M on that index. what can i do to make this faster?
[Edited on January 6, 2009 at 11:01 AM. Reason : 2 queries do the same thing] 1/6/2009 10:59:48 AM |
Stein All American 19842 Posts user info edit post |
Throw an EXPLAIN in front of the queries and post the results here. 1/6/2009 2:08:18 PM |
philihp All American 8349 Posts user info edit post |
what is your ratio of selects to inserts? 1/6/2009 2:37:59 PM |
disco_stu All American 7436 Posts user info edit post |
What Stein said. Also read this: http://www.databasejournal.com/features/mysql/article.php/1382791/Optimizing-MySQL-Queries-and-Indexes.htm Particularly page 3.
EXPLAIN will show how well your index is working, if at all. 1/6/2009 2:42:49 PM |
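For reference, a rough sketch against the schema posted above (the userid value is a placeholder):

EXPLAIN SELECT MIN(date) FROM requests WHERE userid = 12345;

If the plan reports a full table scan instead of using the existing userid_date index, dropping and recreating it as a composite index on both the filter column and the aggregated column lets MySQL answer MIN(date) from the index alone:

ALTER TABLE requests ADD INDEX idx_userid_date (userid, date);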
synapse play so hard 60935 Posts user info edit post |
how can i check for a leading space in a sql query (in the where clause....)
i know i can use ltrim to fix it but first i want to identify them with a query 1/6/2009 3:23:17 PM |
disco_stu All American 7436 Posts user info edit post |
Quote : | "how can i check for a leading space in a sql query (in the where clause....)
i know i can use ltrim to fix it but first i want identify them with a query" |
This doesn't make sense from a SQL point of view. Before the query is executed, the server doesn't have any data about it. You'd have to do this in whatever you're running the query from (i.e., JavaScript or C# or CTRL+F in Notepad). If there is a way to pre-scrub queries with SQL, I'd like to hear about it. 1/6/2009 3:38:49 PM |
qntmfred retired 40723 Posts user info edit post |
^^^ i saw that article too. the explain showed it was doing a full scan, i don't understand why. i'm gonna drop the indices and recreate them.
also, i was doing a bulk insert using LOAD DATA INFILE and that was also taking forever, so i adjusted a bulk_insert_buffer variable and ran DISABLE KEYS before doing the LOAD command. didn't seem to make it any quicker, but i can't remember if i did the ENABLE KEYS afterwards. what command could i use to show whether it's disabled or enabled? 1/6/2009 4:26:54 PM |
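For context, a hedged sketch of that bulk-load sequence (file path and table name are placeholders; note DISABLE KEYS only affects non-unique indexes on MyISAM tables):

SET SESSION bulk_insert_buffer_size = 256 * 1024 * 1024;
ALTER TABLE requests DISABLE KEYS;
LOAD DATA INFILE '/tmp/requests.csv'
    INTO TABLE requests
    FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
ALTER TABLE requests ENABLE KEYS;

To check the current state, SHOW INDEX FROM requests should list the affected indexes with 'disabled' in the Comment column while keys are off.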
Stein All American 19842 Posts user info edit post |
Just run ENABLE KEYS again.
Worst case scenario, you get an error saying they're already on
(I think, note that I've never done this) 1/6/2009 4:29:22 PM |
synapse play so hard 60935 Posts user info edit post |
^^^
this is how i did it
select * from stores where name like ' %'
simple enough, just couldn't think of it earlier 1/6/2009 4:31:04 PM |
disco_stu All American 7436 Posts user info edit post |
Oh, I thought this is what you meant:
I'm looking for 'dog'.
However, if someone puts in ' dog', I still only want to find records that match 'dog'.
-----------------------------------------------
What you're saying is: I'm looking for records that start with a blank space. You're then going to do something with this result set to modify how you query the same data source again? 1/6/2009 4:41:14 PM |
qntmfred retired 40723 Posts user info edit post |
^^^ i did that. it took 30 minutes to finish. that's why i wanted to be able to view the enabled or disabled status before starting a 30 minute process. 1/6/2009 4:43:08 PM |
confusi0n All American 5076 Posts user info edit post |
qntmfred: sounds like a giant wtf to me
think outside the box - keep a running table of the min date for a given user 1/6/2009 4:59:26 PM |
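A hedged sketch of that idea, with illustrative names: one row per user holding the earliest request date, maintained as requests are inserted, so the hot query becomes a primary-key lookup:

CREATE TABLE user_first_request (
    userid     INT  NOT NULL PRIMARY KEY,
    first_date DATE NOT NULL
);

-- when a new request comes in:
INSERT INTO user_first_request (userid, first_date)
VALUES (12345, '2009-01-06')
ON DUPLICATE KEY UPDATE first_date = LEAST(first_date, VALUES(first_date));

-- the slow query then reduces to:
SELECT first_date FROM user_first_request WHERE userid = 12345;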
Stein All American 19842 Posts user info edit post |
You could also (presumably) turn:
Quote : | "SELECT min(date) FROM requests WHERE userid=$userid" |
into:
Quote : | "SELECT date FROM requests WHERE userid='{$userid}' ORDER BY id ASC LIMIT 1" |
which may or may not be faster. I've always found speeds related to MIN() and MAX() to be annoyingly slow.
Plus, this should avoid a full table scan
[Edited on January 6, 2009 at 6:11 PM. Reason : .]
[Edited on January 6, 2009 at 6:33 PM. Reason : whoops, should be ASC]1/6/2009 6:10:08 PM |
qntmfred retired 40723 Posts user info edit post |
id doesn't necessarily correspond to date. the min date could be on any record. i might try confusi0n's suggestion.
actually i'm gonna install the db locally and see if that changes anything. could be crappy dreamhost servers
[Edited on January 6, 2009 at 6:17 PM. Reason : mebbe] 1/6/2009 6:16:44 PM |
qntmfred retired 40723 Posts user info edit post |
is there ever a reason to have a primary key index be unique, non-clustered? 6/10/2009 2:58:46 PM |
ScHpEnXeL Suspended 32613 Posts user info edit post |
duh 6/10/2009 3:23:48 PM |
qntmfred retired 40723 Posts user info edit post |
my familiarity with indexes and best practices is pretty limited so...
from what i'm reading, primary key indexes default to clustered if available (which isn't necessarily the best choice), but if you make it non-clustered, it's exclusively to enforce the uniqueness of the primary key?
[Edited on June 10, 2009 at 3:30 PM. Reason : sql server 2005 btw] 6/10/2009 3:26:50 PM |
philihp All American 8349 Posts user info edit post |
^correct. and to make it easier for people reading the DDL of your tables.
for example, say you've got a USERS table, in the form
userid   number primary key,
username varchar2(40 chars) unique key,
password char(32 bytes),
created  date
perhaps you most often look up rows in this table by username rather than by userid. in that case you would want the table internally sorted by username. logically, applications shouldn't care much about the table's sort order; that optimization should be handled by the DBA once they have a good idea of the frequency and shape of the queries coming in under normal usage. 6/10/2009 3:34:54 PM |
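A hedged SQL Server 2005 sketch of that layout, with illustrative names: the primary key stays unique but is declared NONCLUSTERED, while the clustered index goes on the column most lookups filter by:

CREATE TABLE Users (
    UserId   INT         NOT NULL,
    Username VARCHAR(40) NOT NULL,
    Password CHAR(32)    NOT NULL,
    Created  DATETIME    NOT NULL,
    CONSTRAINT PK_Users PRIMARY KEY NONCLUSTERED (UserId)
);

CREATE UNIQUE CLUSTERED INDEX IX_Users_Username ON Users (Username);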
qntmfred retired 40723 Posts user info edit post |
so even though in, say, a table with 1M rows, the unique, non-clustered index would also have 1M entries (which makes it sound pointless to even have the index), the way those entries are stored on disk makes for a faster lookup than doing a full scan on the table proper?
edit: and even if it weren't faster, i guess you'd still have to have the index to enforce the uniqueness, despite the cost of roughly doubling what you store by adding it. is that right?
[Edited on June 10, 2009 at 4:31 PM. Reason : .] 6/10/2009 3:40:26 PM |
quagmire02 All American 44225 Posts user info edit post |
my experience with SQL is pretty limited (or, at least, simplistic), so subqueries give me a bit of a headache... how would i write a single query (with a subquery, of course) to match up table 1's name with the advisor's name in table 3? this is just an example, btw - i'm not actually doing anything with students or advisors, it was just the first thing i could think of since i'm registering for classes at the moment
table 1
+----+------+-------+
| id | name | class |
+----+------+-------+
|  1 | john |     2 |
|  2 | matt |     1 |
|  3 | fred |     3 |
+----+------+-------+

table 2
+----+-------+---------+
| id | class | advisor |
+----+-------+---------+
|  1 | fresh |       3 |
|  2 | sopho |       1 |
|  3 | junio |       2 |
+----+-------+---------+

table 3
+----+---------+
| id | advisor |
+----+---------+
|  1 | francis |
|  2 | overbey |
|  3 | ericson |
+----+---------+
[Edited on June 10, 2009 at 6:02 PM. Reason : oh, and happy 27k posts to me ]6/10/2009 6:01:47 PM |
A Tanzarian drip drip boom 10995 Posts user info edit post |
You don't really need subqueries for what you're asking, but
select stus_with_class.name, stus_with_class.class, table_3.advisor
from (
select table_1.name, table_2.class, table_2.advisor
from table_1
left join table_2 on table_1.class = table_2.id
) as stus_with_class
left join table_3
on stus_with_class.advisor = table_3.id; 6/10/2009 6:17:13 PM |
disco_stu All American 7436 Posts user info edit post |
Wouldn't this be simpler?
select table_1.name, table_2.class, table_3.advisor
from table_1, table_2, table_3
where table_1.class = table_2.id
  AND table_2.advisor = table_3.id
[Edited on June 11, 2009 at 3:43 PM. Reason : code]6/11/2009 3:43:22 PM |
A Tanzarian drip drip boom 10995 Posts user info edit post |
Yes, but he asked for a subquery. 6/11/2009 6:24:53 PM |
disco_stu All American 7436 Posts user info edit post |
I thought he was assuming that he had to do it with a sub-query and didn't realize you could do it with simple joins. 6/11/2009 7:29:52 PM |
quagmire02 All American 44225 Posts user info edit post |
nah, i wanted to see the subquery version because i've never really done any before and i wanted to take something i already knew how to do and subquery it to see the interaction
does that make sense? probably not
btw, thanks to A Tanzarian 6/12/2009 8:40:15 AM |
Novicane All American 15416 Posts user info edit post |
I can't figure out what's up with this query:
SELECT bhBackhoe,
    (SELECT COUNT(bhID)
     FROM orders
     LEFT JOIN order_items ON oiOrderID = oID
     WHERE oiAllIncluded = 'yes'
       AND bhID = oiBackhoe
       AND oiBackhoeMounted = 'yes') AS OrderNumber,
    (SELECT COUNT(bhID)
     FROM orders
     LEFT JOIN order_items ON oiOrderID = oID
     WHERE oiAllIncluded = 'no'
       AND bhID = oiBackhoe
       AND oiBackhoeMounted = 'yes') AS OrderNumber2
FROM backhoes
ORDER BY bhBackhoe ASC
In plain English this is supposed to return a list of backhoe models in one column. The other two columns, "OrderNumber" and "OrderNumber2", look through the orders and match all the order items up with their orders, then check whether the backhoe was mounted and included (both equal to 'yes'); the second column is for when the backhoe was not included.
Once it finds a match, it's supposed to count that backhoe model, so I end up with totals of each specific model that was ordered as included and as not included.
The query runs, but it only returns one row for me and both the OrderNumber and OrderNumber2 fields say "null". 7/31/2009 8:20:46 AM |
Novicane All American 15416 Posts user info edit post |
well, got it working, for some strange reason if i change
(SELECT COUNT(bhID)
to
(SELECT COUNT(*)
it works.7/31/2009 10:20:47 AM |
Novicane All American 15416 Posts user info edit post |
How can I add another conditional column to this query? Right now it sums everything up.
SELECT carName, COUNT(carID) AS OrderNumber
FROM orders
LEFT JOIN order_items ON OrderID = oID
LEFT JOIN car ON carID = trID
WHERE sold = 'yes'
GROUP BY carID
so i have all the cars and it counts all that we sold for each car.
I'd like to have an OrderNumber2 column that has its own condition of finding only the orders from last year. 8/11/2009 9:54:07 AM |
Novicane All American 15416 Posts user info edit post |
nevermind, i got it [/sqlblog] 8/11/2009 10:29:05 AM |
qntmfred retired 40723 Posts user info edit post |
out of curiosity, what did you end up doing? 8/12/2009 9:14:52 AM |
Novicane All American 15416 Posts user info edit post |
using the SELECT with parentheses let me break it into as many columns as I wanted. While I liked knowing the total sold, i wanted to break it down into two columns, 2008 and 2009. i guess when next year rolls around, i'll just add another select statement. I'm sure i could automate this with PHP over the years, but it'll be ok.
SELECT carName,
    (SELECT COUNT(*)
     FROM orders
     LEFT JOIN order_items ON oiOrderID = oID
     WHERE sold = 'yes'
       AND orders.DateAdded >= '2008-12-31 00:00:00') AS Orders2009,
    (SELECT COUNT(*)
     FROM orders
     LEFT JOIN order_items ON oiOrderID = oID
     WHERE sold = 'yes'
       AND orders.DateAdded <= '2008-12-31 00:00:00') AS Orders2008
FROM cars
ORDER BY carHorsePower ASC
[Edited on August 12, 2009 at 1:34 PM. Reason : terribad format]8/12/2009 1:31:38 PM |
qntmfred retired 40723 Posts user info edit post |
ok. I was going to suggest a subquery, I wasn't sure if there was a more efficient way to do it though 8/12/2009 1:55:35 PM |
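For the archives, a hedged alternative sketch: conditional aggregation with CASE gets both year counts in a single pass instead of two correlated subqueries. It assumes orders can be joined back to cars through a carID column, which may not match the real schema:

SELECT c.carName,
       SUM(CASE WHEN o.DateAdded >= '2009-01-01' THEN 1 ELSE 0 END) AS Orders2009,
       SUM(CASE WHEN o.DateAdded <  '2009-01-01' THEN 1 ELSE 0 END) AS Orders2008
FROM cars c
LEFT JOIN orders o ON o.carID = c.carID AND o.sold = 'yes'   -- hypothetical join
GROUP BY c.carID, c.carName, c.carHorsePower
ORDER BY c.carHorsePower ASC;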
1985 All American 2175 Posts user info edit post |
K, i wrote this query to search our databases for a column name. It used to run in a reasonable time, but now, as we've added more databases and tables, it has slowed down (naturally). There must be a more efficient way out there?
ALTER PROCEDURE [dbo].[FindColumn] @Search varchar(200) AS
SET NOCOUNT ON
SET @Search = '%' + @Search + '%'
create table #tablemap ( DBName varchar(128), TableName varchar(128), ColumnName varchar(128) )
exec sp_MSforeachdb
'IF ''^'' NOT IN (''tempdb'', ''msdb'', ''model'', ''master'')
BEGIN
declare @RETURN_VALUE int
declare @command1 nvarchar(2000)
declare @whereand nvarchar(2000)
set @command1 =
''insert into #tablemap
select a.DBOName, b.tableName, b.name from
((select db_name() as DBOName)
as a
cross join
(select name, tableName from
((select name from syscolumns where id=object_id("?"))
as one
cross join
(select top 1 "?" as tableName from ?)
as two))
as b ) ''
use ^
exec @RETURN_VALUE = sp_MSforeachtable @command1=@command1
END',
@replacechar = '^'
select * from #tablemap
WHERE columnName LIKE @search
drop table #tablemap
[Edited on August 27, 2009 at 7:11 PM. Reason : .]
[Edited on August 27, 2009 at 7:12 PM. Reason : ..]
[Edited on August 27, 2009 at 7:22 PM. Reason : asdf] 8/27/2009 7:10:44 PM |
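For what it's worth, a hedged sketch of a lighter-weight body for the same procedure: instead of visiting every table with sp_MSforeachtable, it asks each database's INFORMATION_SCHEMA.COLUMNS view once (one pass per database). Untested here; the system databases are skipped the same way:

SET @Search = '%' + @Search + '%'

create table #tablemap (
    DBName varchar(128),
    TableName varchar(128),
    ColumnName varchar(128)
)

exec sp_MSforeachdb
'IF ''?'' NOT IN (''tempdb'', ''msdb'', ''model'', ''master'')
BEGIN
    insert into #tablemap
    select ''?'', TABLE_NAME, COLUMN_NAME
    from [?].INFORMATION_SCHEMA.COLUMNS
END'

select * from #tablemap
WHERE ColumnName LIKE @Search

drop table #tablemap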
Wolfmarsh What? 5975 Posts user info edit post |
I have an interesting one. it's kind of like qntm's issue with the stores and the fruit, and i'm surprised this problem didn't come up with it.
I have a table like this:
APPLE   0.50  FRUIT
PEAR    1.00  FRUIT
GRAPE   2.00  FRUIT
CARROT  0.75  VEGETABLE
CORN    5.00  VEGETABLE
POTATO  0.75  VEGETABLE
The outcome I want is the cheapest of each category, which is a pretty easy query to write, but the problem is that both CARROT and POTATO have the same cost, so the result I am getting is:
APPLE   FRUIT
CARROT  VEGETABLE
POTATO  VEGETABLE
I need it to just grab 1 from each group (FRUIT/VEGETABLE), so the results I want are either:
APPLE   FRUIT
CARROT  VEGETABLE
or
APPLE   FRUIT
POTATO  VEGETABLE
I don't care which.
Thoughts? 10/23/2009 2:22:07 PM |
BigMan157 no u 103354 Posts user info edit post |
LIMIT 2? 10/23/2009 2:27:39 PM |
qntmfred retired 40723 Posts user info edit post |
sounds like adding a TOP 1 in there somewhere might help. what's the query you currently have? 10/23/2009 2:28:21 PM |
Wolfmarsh What? 5975 Posts user info edit post |
SELECT [FOOD].*
FROM [FOOD]
LEFT OUTER JOIN [FOOD] AS [CHEAPEST]
    ON [CHEAPEST].[CATEGORY] = [FOOD].[CATEGORY]
    AND [CHEAPEST].[COST] < [FOOD].[COST]
WHERE [CHEAPEST].[COST] IS NULL
This basically finds the rows for which no other food in the same category is cheaper. 10/23/2009 2:31:08 PM |
BigMan157 no u 103354 Posts user info edit post |
throw a GROUP BY [FOOD].[CATEGORY] onto the end of that, see if that does it 10/23/2009 2:43:42 PM |
qntmfred retired 40723 Posts user info edit post |
if you do a group by category, you can't select on the other fields 10/23/2009 2:48:02 PM |
Wolfmarsh What? 5975 Posts user info edit post |
^ Bingo, hence the problem. 10/23/2009 3:01:50 PM |
Wolfmarsh What? 5975 Posts user info edit post |
Got it, although I think it could be better
SELECT MIN([NAME]), [CATEGORY]
FROM (
    SELECT [FOOD].*
    FROM [FOOD]
    LEFT OUTER JOIN [FOOD] AS [CHEAPEST]
        ON [CHEAPEST].[CATEGORY] = [FOOD].[CATEGORY]
        AND [CHEAPEST].[COST] < [FOOD].[COST]
    WHERE [CHEAPEST].[COST] IS NULL
) [LIST]
GROUP BY [CATEGORY]
Wrapping the entire thing in another select that reduces the list once again took care of it. 10/23/2009 3:20:04 PM |