00:36:07 Betzabeth Leon: yes
00:36:13 Emre Perihan: Yes
00:36:13 Zacarias Benta: yes
00:36:14 Numan Nusret Usta: yes
00:36:17 Edixon Parraga Pinzon: yes
00:51:24 Marta Castro: Why would we want to increase the number of layers? What's the benefit?
00:53:15 Bianca De Saedeleer: Depending on the complexity of your data, you want more layers so your network can recognize multiple different patterns that cannot be captured with only one layer, for example
01:11:46 Emre Perihan: thank you
01:13:38 Martin Misson: I know about 3/10
01:13:47 Matthieu Salamone: 5/10
01:13:48 Kenneth Goossens: 5/10
01:13:48 Silvi-Maria Gurova: 3/10
01:13:49 Pablo Saavedra Garfias: 2/10
01:13:52 Gregor Žibert: 2
01:13:53 Bianca De Saedeleer: 9
01:13:53 Rui Figueira: 4/10
01:13:54 Miguel Viana: 2/10
01:13:59 Betzabeth Leon: 2
01:14:00 João Especial: 6/10
01:14:36 Primoz Godec: https://drive.google.com/drive/folders/1rX22hAZH6u8gmocqI2LRlbmWu3w13oT8?usp=sharing
01:17:24 Zacarias Benta: Not yet
01:17:28 Zacarias Benta: Still copying
01:18:39 Primoz Godec: https://drive.google.com/drive/folders/1rX22hAZH6u8gmocqI2LRlbmWu3w13oT8?usp=sharing
01:25:04 Marta Castro: I get Int64Index([0, 1, 2], dtype='int64')
01:25:17 Marta Castro: After converting the classes to integers
01:26:18 Marta Castro: Never mind
01:27:41 Marta Castro: Everything is good
01:27:52 Marta Castro: I just ran the cell two times
01:28:07 Marta Castro: Yes yes
01:33:53 Marta Castro: We can hear
01:34:43 Zacarias Benta: We can hear you, the external mic was never on
01:35:23 Kenneth Goossens: Is the normalization here arbitrary or is it common practice to normalize with (x-mean)/std?
01:36:10 Bianca De Saedeleer: I usually use functions from scikit-learn for preprocessing
01:38:14 Kenneth Goossens: Thanks Bianca
01:40:42 Marta Castro: Why 32?
01:41:09 Matthieu Salamone: Input
01:41:19 Betzabeth Leon: input
01:42:53 Don Winter: Why are we normalizing the test data with the train mean and std, but ignoring the test mean and std?
01:43:19 Bianca De Saedeleer: @Marta, it's kind of a random choice, usually we choose a multiple of 4
01:43:29 Emre Perihan: @Don we did it
01:43:30 Emre Perihan: normed_test_dataset = norm(test_dataset)
01:43:55 Don Winter: in norm(x) we only use train_stats
01:46:26 Shivani Sharma: Is the group picture done?
01:46:52 Bianca De Saedeleer: @Shivani, no
01:56:10 Janez Povh: We will do it just before the coffee break, in a couple of minutes. Please remain connected
02:27:47 Luka Pavešić: The gradient method is typically susceptible to getting caught in local minima, is this a problem in this case too?
02:30:01 Zacarias Benta: Silent dreams? Seriously, mine are full of action. Sometimes I come up with quite a movie… Some of them would make for interesting science-fiction works.
02:38:12 Matthieu Salamone: There is no need to save your model as an object in Keras?
02:40:48 Matthieu Salamone: For example, after training, the initial variable model is changed?
02:41:00 Matthieu Salamone: but it is a Colab thing I suppose :)
02:42:47 Marta Castro: Iris-setosa
02:43:13 Pablo Saavedra Garfias: maybe a quick demonstration on cross-validation?
02:48:34 Don Winter: Maybe a bit off-topic, but does our way of normalization (x-mean)/std turn our dataset into a probability distribution (sum of all x_norm = 1)?
02:51:40 Zacarias Benta: What is the best ratio between training and testing? Does it depend on the dataset size?
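A minimal sketch of the normalization discussed around 01:35-01:44, assuming pandas DataFrames named train_dataset and test_dataset (the names norm, train_stats and normed_test_dataset are quoted in the chat; the rest is assumed). The test set is standardized with the training mean and std so that no statistics of the test set leak into preprocessing:

    # a minimal sketch; train_dataset / test_dataset are assumed to be
    # pandas DataFrames holding only numeric feature columns
    train_stats = train_dataset.describe().transpose()  # per-column count, mean, std, ...

    def norm(x):
        # standardize each column with the *training* mean and std,
        # also when x is the test set, so the test set stays truly unseen
        return (x - train_stats['mean']) / train_stats['std']

    normed_train_dataset = norm(train_dataset)
    normed_test_dataset = norm(test_dataset)

Note that this makes each column roughly zero-mean with unit standard deviation; it does not turn the values into a probability distribution (the normalized values sum to about 0, not 1).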
02:56:52 Milena Veneva: activation
02:56:59 Zacarias Benta: activation
02:57:03 Marta Castro: activation function
02:57:39 Marta Castro: But why can't it be linear?
02:59:16 Milena Veneva: 1
02:59:26 Marta Castro: number of labels?
03:00:08 Luka Pavešić: A linear neural network would be just a normal eigenvalue problem, which could be solved directly but does not have the complexity to describe the data, is that right?
03:02:08 Rok Hribar: yes
03:02:27 Rok Hribar: but not eigenvalue
03:02:28 Marta Castro: Is it just one output node because we want to predict the price, which is a continuous variable, right?
03:02:31 Rok Hribar: singular value
03:04:11 Rok Hribar: yes, the output of the model is the price, which is just a single number
03:07:01 Milena Veneva: Aren't you overwriting the optimizer this way?
03:07:05 Pablo Saavedra Garfias: Is it OK to use the same variable name "optimizer"? No overwrite?
03:07:59 Milena Veneva: Normalized data?
03:09:26 Milena Veneva: fit
03:11:59 Milena Veneva: batch_size?
03:19:22 Pablo Saavedra Garfias: Based on that plot, would we conclude that about 20 epochs is enough?
04:41:59 Milena Veneva: Completely. :)
04:43:58 Pablo Saavedra Garfias: We are predicting medv, but is the model using all the other variables to predict medv? Or is there a way to see which variables are more important than others?
04:47:33 Pablo Saavedra Garfias: ok, thanks
05:04:03 Marta Castro: Why do we need a flattened vector?
05:05:29 Marta Castro: But won't the dimensions get mixed?
05:06:34 Marta Castro: Ok, thanks
05:06:45 Primoz Godec: ,
05:08:18 Milena Veneva: 10
05:08:57 Don Winter: no categorical crossentropy here?
05:11:31 Marta Castro: Should we always use SparseCategoricalCrossentropy(from_logits=True) when we are doing classification, or is there any other function?
05:12:28 Bianca De Saedeleer: https://keras.io/api/losses/
05:13:16 Marta Castro: Thanks
05:15:40 Pablo Saavedra Garfias: Can you show the model compile command again, please?
05:19:31 Zacarias Benta: Yes please
05:24:04 Cymon J. Cox: Try 98
05:24:12 Matthieu Salamone: example for bad prediction: 800
05:24:47 Cymon J. Cox: Worse in mine
05:24:50 Marta Castro: I got shirt (coat)
05:25:21 Matthieu Salamone: it does look like an ankle boot though
05:33:14 Zacarias Benta: How do you choose the mask?
05:36:52 Zacarias Benta: Ok, I got it, so you have to define 5 different masks.
05:37:36 Zacarias Benta: Ok, got it, it is calculated during the learning process.
05:42:06 Cymon J. Cox: Why Conv2D and not one of the alternatives?
05:44:01 Milena Veneva: What's the difference between MaxPool2D and MaxPooling2D?
05:44:31 Milena Veneva: :D
05:44:36 Milena Veneva: Okay, thanks! :)
05:44:40 Cymon J. Cox: LOL OK, thanks
05:46:32 Zacarias Benta: Why is the second Conv2D layer added, to validate the image resizing?
05:47:47 Zacarias Benta: Ok, it's a trial and error process.
05:48:21 Zacarias Benta: Until you get to the optimum model.
05:49:02 Milena Veneva: overlap
05:50:31 Bianca De Saedeleer: You could set a padding for the edges
05:57:21 Milena Veneva: I like the suggestion "search stack overflow" just below the error. :D :D :D
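A minimal sketch of the kind of model being built here, assuming the Fashion-MNIST setup from the notebook; the filter counts, kernel sizes and Dense width are illustrative guesses, not the instructor's exact values. It also shows the compile command asked about at 05:15:40: the last Dense layer outputs raw logits for the 10 classes, so the loss is SparseCategoricalCrossentropy(from_logits=True). (In tf.keras, MaxPool2D is simply an alias for MaxPooling2D, so the two are interchangeable.)

    import tensorflow as tf
    from tensorflow.keras import layers

    # illustrative CNN for 28x28 grayscale images; layer sizes are assumptions
    model = tf.keras.Sequential([
        layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),       # same layer as MaxPool2D, different name
        layers.Conv2D(32, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                  # flatten the feature maps before the Dense layers
        layers.Dense(64, activation='relu'),
        layers.Dense(10),                  # one raw logit per class, no softmax here
    ])

    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=['accuracy'],
    )

With from_logits=True the softmax is applied inside the loss; alternatively the last layer can use a softmax activation, in which case from_logits should be left at its default of False.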
06:00:26 Cymon J. Cox: Can you go back to the expand_dims command for a second?
06:00:51 Cymon J. Cox: Yeah, that didn't work for me
06:01:25 Cymon J. Cox: Not working
06:01:32 Zacarias Benta: My model still crashes
06:01:35 Cymon J. Cox: ValueError: Negative dimension size caused by subtracting 3 from 1 for '{{node sequential_6/conv2d_2/Conv2D/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](sequential_6/conv2d_2/Conv2D/Reshape, sequential_6/conv2d_2/Conv2D/Conv2D/ReadVariableOp)' with input shapes: [25088,1,1,1], [3,3,1,16].
06:02:35 Cymon J. Cox: Ah!
06:02:37 Cymon J. Cox: Yep
06:02:59 Cymon J. Cox: (60000, 28, 28, 1, 1, 1, 1) (10000, 28, 28, 1, 1, 1, 1)
06:03:28 Marta Castro: But why do we have to add a new dimension to a 2D image anyway?
06:03:55 Cymon J. Cox: Thanks, working
06:07:19 Milena Veneva: typo
06:10:42 Martin Misson: break until 15:10, if anyone missed it
06:11:19 Betzabeth Leon: thanks
06:29:39 Marta Castro: For this image, did we have better predictions using convolution?
06:29:53 Marta Castro: (the 2D image, I mean)
06:33:49 Milena Veneva: What about the preprocessing?
06:33:57 Milena Veneva: Is it the same?
06:59:57 Milena Veneva: You have swapped the da/dx2 and db/dx2.
07:47:16 Milena Veneva: Thanks! :)
07:47:34 Betzabeth Leon: Thank you very much!
07:47:35 Don Winter: thanks, really interesting stuff
07:47:35 Emre Perihan: thank you!
07:47:41 Edixon Parraga Pinzon: Thanks!
07:47:41 Leon Bogdanovic: Thanks!
07:47:46 João Especial: Fab lecture! Congrats
07:47:52 FABIANA MIRANDA: Thanks! Really interesting
07:48:09 Matthieu Salamone: Great lecture & great autumn course!
07:48:11 Pablo Saavedra Garfias: Great lectures... thank you!!
07:48:13 Marta Castro: Thanks
07:48:28 Miguel: Thanks
07:48:28 Emre Perihan: see you :)
07:48:28 Gregor Žibert: Thank you, very interesting, bye!
07:48:29 Miguel: bye
07:48:30 Betzabeth Leon: bye
07:48:30 Zacarias Benta: Thanks
07:48:32 Zacarias Benta: bye
07:48:34 Gianluca De Moro: Thanks, bye
07:48:35 Nuno Agostinho: bye, thanks!
07:48:38 Zacarias Benta: See you next year
07:48:38 Bianca De Saedeleer: thank you!
07:48:59 Edixon Parraga Pinzon: bye
07:49:37 Rita Belo: thanks!!
07:49:42 Rui Figueira: thank you
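Looking back at the expand_dims trouble reported around 06:01: the seven-dimensional shapes (60000, 28, 28, 1, 1, 1, 1) printed in the chat suggest the expand_dims cell had been run several times, which is also consistent with the "Negative dimension size" error. A minimal sketch of the intended step, assuming NumPy arrays named train_images and test_images (names assumed, not from the chat):

    import numpy as np

    # Conv2D expects inputs of shape (batch, height, width, channels), so the
    # 28x28 grayscale images need one explicit channel axis:
    # (60000, 28, 28) -> (60000, 28, 28, 1)
    train_images = np.expand_dims(train_images, axis=-1)
    test_images = np.expand_dims(test_images, axis=-1)

    # running the cell repeatedly keeps appending size-1 axes; if that happened,
    # reshape back to exactly four dimensions instead of re-running expand_dims
    train_images = train_images.reshape(-1, 28, 28, 1)
    test_images = test_images.reshape(-1, 28, 28, 1)

Printing train_images.shape after this cell is an easy way to check that it is (60000, 28, 28, 1) before building the model.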